CN110738072A - Living body judgment method and device - Google Patents
- Publication number
- CN110738072A CN110738072A CN201810789284.5A CN201810789284A CN110738072A CN 110738072 A CN110738072 A CN 110738072A CN 201810789284 A CN201810789284 A CN 201810789284A CN 110738072 A CN110738072 A CN 110738072A
- Authority
- CN
- China
- Prior art keywords
- face
- living body
- target
- eye
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The embodiments of the present application provide a living body judgment method and device.
Description
Technical Field
The application relates to the technical field of computers, and in particular to a living body judgment method and device.
Background
Face recognition technology is a branch of the field of biometric recognition technology. Compared with technologies such as iris recognition, fingerprint scanning and palm shape scanning, face recognition uses a general-purpose camera as the recognition information acquisition device and can complete the recognition process in a non-contact manner.
Disclosure of Invention
In order to overcome the above-mentioned shortcomings in the prior art, the present application aims to provide a living body judgment method and device, so as to solve or mitigate the above-mentioned problems.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
In a first aspect, the present application provides a living body judgment method applied to an electronic device, the method including:
acquiring a face eye area of a target to be recognized in an infrared scene;
inputting the human face eye region into a pre-configured living body recognition model for recognition to obtain a recognition result;
and judging whether the target to be identified is a living target according to the identification result to obtain a judgment result.
Optionally, the step of acquiring a face eye region of the target to be recognized in the infrared scene includes:
acquiring an infrared imaging image;
carrying out face detection on the infrared imaging image to obtain a face area in the infrared imaging image;
performing face fixed point on the face area to obtain a face fixed point result;
and extracting a face eye region in the face region according to the face fixed point result.
Optionally, the step of extracting a face eye region in the face region according to the face fixed point result includes:
acquiring coordinates of facial key points in the face fixed point result, wherein the coordinates of the facial key points comprise a left eye coordinate, a right eye coordinate and a nose tip coordinate;
generating an eye region rectangular frame in the face region based on the facial keypoint coordinates;
and extracting the area corresponding to the eye area rectangular frame as the face eye area.
Optionally, before the step of acquiring the face-eye area of the target to be recognized in the infrared scene, the method further includes:
configuring the living body recognition model;
the manner of configuring the living body recognition model includes:
acquiring sample data, wherein the sample data comprises a plurality of face and eye area samples;
training the convolutional neural network based on the sample data to obtain network parameters meeting preset conditions, wherein the convolutional neural network includes two convolutional layers, two pooling layers and one fully-connected layer;
configuring a living body recognition model based on the network parameters.
Optionally, the step of inputting the human face and eye region into a pre-configured living body recognition model for recognition to obtain a recognition result includes:
and inputting the human face eye region into a pre-configured living body recognition model for recognition to obtain the probability that the target to be recognized is a living body target and the probability that the target to be recognized is a prosthesis target.
Optionally, the step of determining whether the target to be identified is a living target according to the identification result to obtain a determination result includes:
judging whether the probability that the target to be identified is a living target is greater than a preset probability threshold value or not;
if so, determining that the target to be identified is a living target, otherwise, determining that the target to be identified is a prosthesis target.
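Taken together, the steps of the first aspect can be sketched as the following pipeline. This is an illustrative sketch only: every callable below is an injected placeholder invented for the example, since the application does not name concrete APIs, and the 0.90 threshold is taken from the 90% example given later in the description.

```python
def liveness_pipeline(image, detect_face, fix_points, crop_eye_region,
                      recognize, threshold=0.90):
    # Sketch of the claimed flow: face detection -> face fixed point ->
    # eye-region extraction -> living body recognition -> thresholding.
    # All five callables are hypothetical placeholders, not patent APIs.
    face_region = detect_face(image)               # face area in the infrared image
    fixed_point_result = fix_points(face_region)   # facial keypoint coordinates
    eye_region = crop_eye_region(face_region, fixed_point_result)
    p_living, p_prosthesis = recognize(eye_region)  # two-class probabilities
    return "living" if p_living > threshold else "prosthesis"
```

With stub components plugged in, the pipeline returns "living" exactly when the model's living-body probability exceeds the preset threshold.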
In a second aspect, embodiments of the present application further provide a living body judgment device applied to an electronic device, where the device includes:
the acquisition module is used for acquiring a face eye area of a target to be recognized in an infrared scene;
the recognition module is used for inputting the human face eye region into a pre-configured living body recognition model for recognition to obtain a recognition result;
and the judging module is used for judging whether the target to be identified is a living target according to the identification result to obtain a judgment result.
In a third aspect, an embodiment of the present application further provides a readable storage medium, on which a computer program is stored, the computer program, when executed, implementing the living body judgment method described above.
Compared with the prior art, the method has the following beneficial effects:
according to the living body judgment method and device, the face eye region of the target to be recognized in an infrared scene is obtained and input into a pre-configured living body recognition model for recognition to obtain a recognition result, and whether the target to be recognized is a living target is then judged according to the recognition result. The method and device can thus judge face liveness simply and efficiently, so that the authenticity of a face can be accurately determined, attacks in which lawless persons spoof a security system with face reproductions are prevented, and security in security scenarios is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and it will be apparent to those skilled in the art that other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flowchart of a living body determination method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an infrared imaging image provided by an embodiment of the present application;
fig. 3 is a schematic view of a face after face detection according to an embodiment of the present application;
fig. 4 is a schematic view of a face after face pointing according to an embodiment of the present application;
fig. 5 is a schematic diagram of extracted face-eye regions provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a convolutional neural network provided in an embodiment of the present application;
FIG. 7 is a functional block diagram of a living body judging apparatus according to an embodiment of the present application;
fig. 8 is a block diagram schematically illustrating a structure of an electronic device according to an embodiment of the present application.
Icon: 100-an electronic device; 110-a bus; 120-a processor; 130-a storage medium; 140-bus interface; 150-a network adapter; 160-a user interface; 200-a living body judgment device; 209-configuration module; 210-an obtaining module; 220-an identification module; 230-a judgment module.
Detailed Description
At present, along with the gradual commercialization of biometric identification technology in security systems, the technology is developing toward automated, unsupervised operation. However, while current face recognition technology can identify the identity in a face image, it cannot accurately distinguish whether the face is genuine.
The inventor found during research that a two-stream CNN-based face liveness detection has been proposed, in which local features and a holistic depth map are extracted from a face picture; the local features help the CNN distinguish fake face patches from real face regions. In addition, the holistic depth map detects whether the input picture has depth consistent with a human face.
This method uses deep learning for the judgment, but it needs to extract multiple patches of the face and then extract local features with a CNN for each patch, which is computationally very expensive and difficult to run in real time on an embedded device; moreover, it uses color pictures, in which the distinction between real persons and prostheses is not obvious.
In addition, it has also been proposed to use the spectral feature codes of material surfaces to perform face verification under short-wave infrared (SWIR) so as to distinguish real human skin from other materials. A SWIR imaging system has likewise been provided that can acquire multispectral images of four wave bands in real time and uses pulsed narrow-band illumination; this requires rapid image acquisition and high spectral resolution, but makes the system independent of ambient light. Such a scheme needs an additional short-wave emitting device and complex auxiliary equipment, and is therefore very costly.
In addition, it has been proposed to provide lip language prompt information to an identified object, collect at least one frame of images of the identified object, detect, when the at least one frame of images includes lip changes, whether the lip changes match the lip language prompt information, and, if they match, determine that the identified object is a living body.
The drawbacks of the above prior-art solutions are the results of the inventor's practical and careful study; therefore, both the discovery process of the above problems and the solutions proposed by the following embodiments for these problems should be regarded as the inventor's contributions to the present application.
The technical solutions in the embodiments of the present application will be described more fully and clearly below with reference to the accompanying drawings in the embodiments of the present application. It should be understood that the described embodiments are some, but not all, embodiments of the present application.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that like reference numerals and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
Please refer to fig. 1, which is a flowchart illustrating a living body determining method according to an embodiment of the present disclosure. It should be noted that the living body judgment method provided in the embodiments of the present application is not limited to the specific sequence shown in fig. 1 and described below. The method comprises the following specific steps:
step S210, acquiring a face eye area of the target to be recognized in the infrared scene.
In some embodiments, step S210 can be implemented as follows:
First, an infrared imaging image is acquired. By way of example, an infrared imaging image such as the one shown in fig. 2 allows more effective subsequent processing than a conventional color image. Optionally, the infrared imaging image may be captured in advance, captured in real time by an infrared camera, or obtained from an external device, which is not limited herein.
And then, face detection is performed on the infrared imaging image to obtain the face region in the infrared imaging image. The face regions obtained after face detection can be seen in fig. 3: the face in the middle is a living target, while those on the left and right are prosthesis targets.
And then, carrying out face fixed point on the face area to obtain a face fixed point result.
In one implementation, the facial keypoint coordinates in the face fixed point result are obtained first. For example, the facial keypoint coordinates include the left eye coordinate, the right eye coordinate and the nose tip coordinate, which can be expressed as (x1, y1), (x2, y2) and (x3, y3), respectively. The face fixed point result can be seen in fig. 4.
An eye region rectangular frame is then generated in the face region based on the facial keypoint coordinates. Optionally, the eye region rectangular frame may be determined as follows:
the coordinates of the upper left corner of the eye region rectangular box may be:
max(0, x1 - (x2 - x1)/2), max(0, y1 - (y3 - (y1 + y2)/2)/2)
the coordinates of the lower right corner of the eye area rectangular box may be:
x2 + (x2 - x1)/2, y2 + (y3 - (y1 + y2)/2)/2
and finally, on the basis of obtaining the eye region rectangular frame, extracting a region corresponding to the eye region rectangular frame as the human face eye region. The extracted face-eye region can be seen in fig. 5.
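The rectangular-frame formulas above translate directly into code. The following sketch assumes pixel coordinates with the origin at the top-left; the function name and tuple interface are illustrative choices, not from the patent.

```python
def eye_region_box(left_eye, right_eye, nose_tip):
    # left_eye = (x1, y1), right_eye = (x2, y2), nose_tip = (x3, y3)
    x1, y1 = left_eye
    x2, y2 = right_eye
    x3, y3 = nose_tip
    half_eye_dist = (x2 - x1) / 2         # half the inter-eye horizontal distance
    half_drop = (y3 - (y1 + y2) / 2) / 2  # half the eye-midline-to-nose-tip vertical distance
    top_left = (max(0, x1 - half_eye_dist), max(0, y1 - half_drop))
    bottom_right = (x2 + half_eye_dist, y2 + half_drop)
    return top_left, bottom_right
```

For example, with eyes at (100, 100) and (140, 100) and the nose tip at (120, 140), the frame runs from (80, 80) to (160, 120).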
It should be noted that the above-mentioned manner of acquiring the face eye region is only an example of the present embodiment, and those skilled in the art may also adopt any other manner capable of extracting the face eye region to implement step S210.
Therefore, based on the design, the human face and eye region in the infrared imaging image can be extracted, and subsequent living body judgment is facilitated.
And step S220, inputting the human face eye region into a pre-configured living body recognition model for recognition, and obtaining a recognition result.
Before describing step S220 further, the configuration process of the living body recognition model will first be described. In this embodiment, before step S210, the method may further include the following steps:
configuring the living body recognition model;
the manner of configuring the living body recognition model includes:
First, sample data is obtained, where the sample data includes a plurality of face eye region samples; for the manner of obtaining the face eye region samples, reference may be made to the extraction method of the face eye region described above, and details are not repeated here.
Then, the convolutional neural network is trained based on the sample data to obtain network parameters meeting preset conditions, where the convolutional neural network includes two convolutional layers, two pooling layers and one fully-connected layer. Specifically, as shown in fig. 6, this embodiment configures a simplified convolutional neural network that can be applied on an embedded device while still ensuring the recognition effect.
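As a rough sanity check of such a topology, the feature-map sizes can be traced layer by layer. The 32x32 single-channel input, 5x5 kernels, 2x2 pooling and channel counts below are illustrative assumptions; the application only fixes the layer counts, not these shapes.

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard output-size formula for a convolution or pooling layer.
    return (size + 2 * pad - kernel) // stride + 1

size, channels = 32, 1                          # assumed 32x32 grayscale infrared eye crop
for kernel, n_filters in [(5, 16), (5, 32)]:    # two conv layers (assumed shapes)
    size = conv_out(size, kernel)               # unpadded 5x5 convolution
    size = conv_out(size, 2, stride=2)          # 2x2 max pooling, stride 2
    channels = n_filters
fc_inputs = size * size * channels              # inputs to the single fully-connected layer
print(fc_inputs)
```

Under these assumed shapes the two conv/pool stages reduce the map to 5x5x32, i.e. 800 inputs to the fully-connected layer, which is small enough for an embedded device.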
Finally, after the network parameters are obtained, the living body recognition model is configured based on the network parameters; once the training end condition is reached, the living body recognition model is output and can be used for subsequent living body recognition.
Therefore, after the face-eye region in step S210 is obtained, the face-eye region is input into the trained living body recognition model for recognition, and the probability that the target to be recognized is a living body target and the probability that the target to be recognized is a prosthesis target can be obtained.
And step S230, judging whether the target to be identified is a living target according to the identification result to obtain a judgment result.
In this embodiment, after obtaining the probability that the target to be recognized is a living body target and the probability that the target to be recognized is a prosthesis target, it may be determined whether the probability that the target to be recognized is a living body target is greater than a preset probability threshold, if the probability that the target to be recognized is a living body target is greater than the preset probability threshold, it is determined that the target to be recognized is a living body target, otherwise, it is determined that the target to be recognized is a prosthesis target. For example, if the probability that the target to be recognized is a living body target is 95%, the preset threshold is 90%, and since 95% is greater than 90%, the target to be recognized is a living body target.
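The two probabilities and the comparison above can be sketched as follows. The patent only states that the model outputs a living-body probability and a prosthesis probability; the two-way softmax producing them, the function names, and the 0.90 default (taken from the 90% example) are assumptions for illustration.

```python
import math

def softmax2(logit_living, logit_prosthesis):
    # Convert two raw scores into probabilities summing to 1.
    # (Assumes the recognition model ends in a two-way softmax;
    # the application does not specify the output layer.)
    e0, e1 = math.exp(logit_living), math.exp(logit_prosthesis)
    return e0 / (e0 + e1), e1 / (e0 + e1)

def judge_living(p_living, threshold=0.90):
    # Living target iff its probability exceeds the preset threshold;
    # otherwise the target is judged a prosthesis target.
    return "living" if p_living > threshold else "prosthesis"

p_live, p_fake = softmax2(3.0, -1.0)
print(judge_living(p_live))
```

The two probabilities always sum to one, so thresholding the living-body probability alone is sufficient, as in the 95% > 90% example above.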
It can be understood that, in implementation, the preset probability threshold may be set according to actual design requirements, and this embodiment does not limit this.
Under infrared imaging, the eyes of a living body exhibit an obvious whitening characteristic, whereas the eye region of a prosthesis such as a photo does not, so the two can be well distinguished; even the eye region of a person wearing glasses still exhibits this characteristic under infrared. Therefore, living bodies and prostheses can be distinguished in an infrared scene through deep learning, and, combined with the living body recognition model, a simple and efficient judgment can be achieved.
Further, referring to FIG. 7, the present application also provides a living body judgment apparatus 200, which may include:
the obtaining module 210 is configured to obtain a face-eye region of the target to be recognized in an infrared scene.
And the recognition module 220 is configured to input the face-eye region into a pre-configured living body recognition model for recognition, so as to obtain a recognition result.
And the judging module 230 is configured to judge whether the target to be identified is a living target according to the identification result, so as to obtain a judgment result.
Optionally, the obtaining module 210 is further configured to obtain an infrared imaging image, perform face detection on the infrared imaging image to obtain the face region in the infrared imaging image, perform face fixed point on the face region to obtain a face fixed point result, and extract the face eye region in the face region according to the face fixed point result.
Optionally, the manner of extracting the face eye region in the face region according to the face fixed point result includes:
acquiring coordinates of facial key points in the face fixed point result, wherein the coordinates of the facial key points comprise a left eye coordinate, a right eye coordinate and a nose tip coordinate;
generating an eye region rectangular frame in the face region based on the facial keypoint coordinates;
and extracting the area corresponding to the eye area rectangular frame as the face eye area.
Further, still referring to FIG. 7, the apparatus may optionally further include:
a configuration module 209 for configuring the living body recognition model;
the manner of configuring the living body recognition model includes:
acquiring sample data, wherein the sample data comprises a plurality of face and eye area samples;
training the convolutional neural network based on the sample data to obtain network parameters meeting preset conditions, wherein the convolutional neural network includes two convolutional layers, two pooling layers and one fully-connected layer;
configuring a living body recognition model based on the network parameters.
It can be understood that, for the specific operation method of each functional module in this embodiment, reference may be made to the detailed description of the corresponding step in the foregoing method embodiment, and no repeated description is provided herein.
Further, please refer to fig. 8, which is a schematic block diagram of the structure of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may capture images of a corresponding area in scenarios where living body judgment is needed; for example, the electronic device 100 may be a monitoring device, or any other terminal, such as a mobile phone, a tablet (Pad), a Virtual Reality (VR) terminal, an Augmented Reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation security, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like.
As shown in FIG. 8, the electronic device 100 may be implemented with a general bus architecture through bus 110. Depending on the specific application of the electronic device 100 and the overall design constraints, bus 110 may include any number of interconnecting buses and bridges, which connect together various circuits including processor 120, storage medium 130, and bus interface 140. Optionally, the electronic device 100 may connect a network adapter 150 or the like via bus 110 using bus interface 140; the network adapter 150 may be used to implement signal processing functions of the physical layer of the electronic device 100 and to transmit and receive radio frequency signals via an antenna. User interface 160 may connect external devices, such as a keyboard, display, mouse, or joystick. Bus 110 may also connect various other circuits, such as timing sources, peripherals, voltage regulators, or power management circuits, which are well known in the art and therefore not described in detail.
Alternatively, the electronic device 100 may be configured as a general-purpose processing system, commonly referred to as a chip, including one or more microprocessors providing processing functionality and an external memory providing at least a portion of storage medium 130, all connected together with other supporting circuitry through an external bus architecture.
Alternatively, the electronic device 100 may be implemented using an ASIC (application-specific integrated circuit) having processor 120, bus interface 140, user interface 160, and at least a portion of storage medium 130 integrated in a single chip; or the electronic device 100 may be implemented using one or more FPGAs (field programmable gate arrays), PLDs (programmable logic devices), controllers, state machines, logic, discrete hardware components, any other suitable circuitry, or any combination of circuits capable of performing the various functions described throughout this application.
Processor 120 is responsible for managing bus 110 and general processing (including executing software stored on storage medium 130). Processor 120 may be implemented using one or more general-purpose processors and/or special-purpose processors; examples include microprocessors, microcontrollers, DSP processors, and other circuitry capable of executing software. Software should be construed broadly to represent instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
The processor 120 may execute the method in the above embodiments. Specifically, the storage medium 130 may store the living body judgment device 200, and the processor 120 may be configured to execute the living body judgment device 200.
In summary, according to the living body judgment method and device provided by the embodiments of the present application, the face eye region of the target to be recognized in an infrared scene is obtained and input into a pre-configured living body recognition model for recognition to obtain a recognition result, and whether the target to be recognized is a living target is then judged according to the recognition result. The method and device can thus judge face liveness simply and efficiently, so that the authenticity of a face can be accurately determined, attacks in which lawless persons spoof a security system with face reproductions are prevented, and security in security scenarios is improved.
The apparatus and method embodiments described above are illustrative only. The flow diagrams and block diagrams in the figures show possible implementations of systems, methods and computer program products according to various embodiments of the present application; in this regard, each block in the flow diagrams or block diagrams may represent a module, program segment, or portion of code that comprises one or more executable instructions for implementing the specified logical functions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be, for example, a solid state disk or a magnetic storage medium such as a magnetic disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a series of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
It will thus be seen that the embodiments are illustrative and non-limiting in all respects; the scope of the application is indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein.
Claims (10)
1. A living body judgment method applied to an electronic device, the method comprising:
acquiring a face eye area of a target to be recognized in an infrared scene;
inputting the human face eye region into a pre-configured living body recognition model for recognition to obtain a recognition result;
and judging whether the target to be identified is a living target according to the identification result to obtain a judgment result.
2. The living body judgment method according to claim 1, wherein the step of acquiring the face eye area of the target to be recognized in the infrared scene comprises:
acquiring an infrared imaging image;
carrying out face detection on the infrared imaging image to obtain a face area in the infrared imaging image;
performing face fixed point on the face area to obtain a face fixed point result;
and extracting a face eye region in the face region according to the face fixed point result.
3. The living body judgment method according to claim 2, wherein the step of extracting the face eye region from the face region according to the facial landmark result comprises:
acquiring facial key point coordinates from the facial landmark result, wherein the facial key point coordinates comprise a left eye coordinate, a right eye coordinate, and a nose tip coordinate;
generating an eye region rectangular frame in the face region based on the facial key point coordinates;
and extracting the region corresponding to the eye region rectangular frame as the face eye region.
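The rectangle construction in claim 3 can be sketched as follows. The padding factor and the rule capping the box height above the nose tip are illustrative assumptions; the claim does not fix the exact geometry, only that the rectangle is generated from the left eye, right eye, and nose tip coordinates.

```python
# Illustrative sketch of generating an eye-region rectangle from the three
# claimed key points. The padding factor and height rule are assumptions,
# not taken from the patent.

def eye_region_rect(left_eye, right_eye, nose_tip, pad=0.25):
    """Return (x, y, w, h) of a rectangle covering both eyes.

    All inputs are (x, y) pixel coordinates; pad widens the box
    horizontally by a fraction of the inter-eye distance, and the box
    height stops halfway between the eye line and the nose tip.
    """
    eye_dist = right_eye[0] - left_eye[0]
    x0 = left_eye[0] - pad * eye_dist
    x1 = right_eye[0] + pad * eye_dist
    eye_cy = (left_eye[1] + right_eye[1]) / 2   # vertical center of the eye line
    half_h = (nose_tip[1] - eye_cy) / 2         # stop halfway to the nose tip
    return (x0, eye_cy - half_h, x1 - x0, 2 * half_h)

rect = eye_region_rect((40, 60), (100, 58), (70, 95))
```

Cropping this rectangle from the face region yields the face eye region that is fed to the recognition model.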
4. The living body judgment method according to claim 1, wherein before the step of acquiring the face eye area of the target to be recognized in the infrared scene, the method further comprises:
configuring the living body recognition model;
the manner of configuring the living body recognition model includes:
acquiring sample data, wherein the sample data comprises a plurality of face eye region samples;
training the constructed convolutional neural network based on the sample data to obtain network parameters meeting a preset condition, wherein the convolutional neural network comprises two convolutional layers, two pooling layers, and a fully connected layer;
configuring a living body recognition model based on the network parameters.
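The network of claim 4 (two convolutional layers, two pooling layers, one fully connected layer) implies a simple shape calculation for the fully connected layer's input. The sketch below walks through it; the 32x32 input size, 5x5 kernels, and channel count are assumptions, since the claim does not specify them.

```python
# Shape walk-through for a CNN of the kind described in claim 4: two
# convolutional layers, two pooling layers, and a fully connected layer.
# Input resolution, kernel sizes, and channel count are assumed values.

def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of a pooling layer."""
    return (size - kernel) // stride + 1

size = 32                  # assumed grayscale eye-region crop, 32x32
size = conv_out(size, 5)   # conv1: 32 -> 28
size = pool_out(size)      # pool1: 28 -> 14
size = conv_out(size, 5)   # conv2: 14 -> 10
size = pool_out(size)      # pool2: 10 -> 5
channels = 16              # assumed number of conv2 output channels
fc_inputs = channels * size * size   # features flattened into the FC layer
# The fully connected layer maps fc_inputs -> 2 scores
# (living body vs. prosthesis), typically followed by softmax.
```

Training these layers on the claimed face eye region samples then yields the network parameters used to configure the living body recognition model.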
5. The living body judgment method according to claim 1, wherein the step of inputting the face eye region into a pre-configured living body recognition model for recognition to obtain a recognition result comprises:
inputting the face eye region into the pre-configured living body recognition model for recognition to obtain a probability that the target to be recognized is a living body target and a probability that the target to be recognized is a prosthesis target.
6. The living body judgment method according to claim 5, wherein the step of judging whether the target to be recognized is a living target according to the recognition result to obtain a judgment result comprises:
judging whether the probability that the target to be recognized is a living target is greater than a preset probability threshold;
and if so, determining that the target to be recognized is a living target; otherwise, determining that the target to be recognized is a prosthesis target.
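The decision rule of claims 5 and 6 reduces to a threshold comparison on the living-body probability. A minimal sketch follows; the threshold value 0.5 is an assumed default, since the claim leaves the preset threshold unspecified.

```python
# Sketch of the claimed judgment step: the model outputs a living-body
# probability and a prosthesis probability; only the living-body
# probability is compared against the preset threshold.

def judge(p_live, p_prosthesis, threshold=0.5):
    """Return 'living' if p_live exceeds the threshold, else 'prosthesis'."""
    return "living" if p_live > threshold else "prosthesis"

result = judge(0.93, 0.07)  # a confident living-body score passes the threshold
```

Note that the prosthesis probability is carried along only because the model outputs both; under this rule the judgment depends solely on whether the living-body probability clears the threshold.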
7. A living body judgment device, applied to an electronic device, the device comprising:
the acquisition module is used for acquiring a face eye area of a target to be recognized in an infrared scene;
the recognition module is used for inputting the face eye region into a pre-configured living body recognition model for recognition to obtain a recognition result;
and the judging module is used for judging whether the target to be recognized is a living target according to the recognition result to obtain a judgment result.
8. The living body judgment device according to claim 7, characterized in that:
the acquisition module is further used for acquiring an infrared imaging image, carrying out face detection on the infrared imaging image to obtain a face region in the infrared imaging image, performing facial landmark localization on the face region to obtain a facial landmark result, and extracting a face eye region from the face region according to the facial landmark result.
9. The living body judgment device according to claim 8, wherein the manner of extracting the face eye region from the face region according to the facial landmark result comprises:
acquiring facial key point coordinates from the facial landmark result, wherein the facial key point coordinates comprise a left eye coordinate, a right eye coordinate, and a nose tip coordinate;
generating an eye region rectangular frame in the face region based on the facial key point coordinates;
and extracting the region corresponding to the eye region rectangular frame as the face eye region.
10. The living body judgment device according to claim 7, characterized by further comprising:
a configuration module for configuring the living body recognition model;
the manner of configuring the living body recognition model includes:
acquiring sample data, wherein the sample data comprises a plurality of face eye region samples;
training the constructed convolutional neural network based on the sample data to obtain network parameters meeting a preset condition, wherein the convolutional neural network comprises two convolutional layers, two pooling layers, and a fully connected layer;
configuring a living body recognition model based on the network parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810789284.5A CN110738072A (en) | 2018-07-18 | 2018-07-18 | Living body judgment method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810789284.5A CN110738072A (en) | 2018-07-18 | 2018-07-18 | Living body judgment method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110738072A true CN110738072A (en) | 2020-01-31 |
Family
ID=69234261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810789284.5A Pending CN110738072A (en) | 2018-07-18 | 2018-07-18 | Living body judgment method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110738072A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111428559A (en) * | 2020-02-19 | 2020-07-17 | 北京三快在线科技有限公司 | Method and device for detecting wearing condition of mask, electronic equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103106397A (en) * | 2013-01-19 | 2013-05-15 | 华南理工大学 | Human face living body detection method based on bright pupil effect |
CN105184277A (en) * | 2015-09-29 | 2015-12-23 | 杨晴虹 | Living body human face recognition method and device |
CN105243386A (en) * | 2014-07-10 | 2016-01-13 | 汉王科技股份有限公司 | Face living judgment method and system |
CN106599829A (en) * | 2016-12-09 | 2017-04-26 | 杭州宇泛智能科技有限公司 | Face anti-counterfeiting algorithm based on active near-infrared light |
CN107590473A (en) * | 2017-09-19 | 2018-01-16 | 杭州登虹科技有限公司 | A face living body detection method, medium and related apparatus |
CN107609383A (en) * | 2017-10-26 | 2018-01-19 | 深圳奥比中光科技有限公司 | 3D face identity authentication method and device |
CN107862298A (en) * | 2017-11-27 | 2018-03-30 | 电子科技大学 | A living body detection method based on eye blinking under infrared imaging |
CN108009531A (en) * | 2017-12-28 | 2018-05-08 | 北京工业大学 | A multi-strategy anti-fraud face recognition method |
CN108108676A (en) * | 2017-12-12 | 2018-06-01 | 北京小米移动软件有限公司 | Face recognition method, convolutional neural network generation method and device |
Non-Patent Citations (2)
Title |
---|
ZHU Zhenzhen et al.: "Real-time detection of face and eye states based on Kinect", Journal of Dalian Nationalities University * |
ZHU Qiuyu: "Research on Feature Extraction, Retrieval and Compression of ID Photos", Shanghai University Press * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10769423B2 (en) | Method, system and terminal for identity authentication, and computer readable storage medium | |
ElBadawy et al. | Arabic sign language recognition with 3d convolutional neural networks | |
CN110462633B (en) | Face recognition method and device and electronic equipment | |
CN106709404B (en) | Image processing apparatus and image processing method | |
US20180276487A1 (en) | Method, system, and computer-readable recording medium for long-distance person identification | |
Loke et al. | Indian sign language converter system using an android app | |
CN108781252B (en) | Image shooting method and device | |
CN106991364B (en) | Face recognition processing method and device and mobile terminal | |
CN111339831A (en) | Lighting lamp control method and system | |
CN109684993B (en) | Face recognition method, system and equipment based on nostril information | |
CN114424258A (en) | Attribute identification method and device, storage medium and electronic equipment | |
CN111881813B (en) | Data storage method and system of face recognition terminal | |
CN112507897A (en) | Cross-modal face recognition method, device, equipment and storage medium | |
CN112136140A (en) | Method and apparatus for image recognition | |
Agarwal et al. | Hand gesture recognition using discrete wavelet transform and support vector machine | |
CN112883827B (en) | Method and device for identifying specified target in image, electronic equipment and storage medium | |
CN111680670B (en) | Cross-mode human head detection method and device | |
CN113807166A (en) | Image processing method, device and storage medium | |
CN110738072A (en) | Living body judgment method and device | |
CN111597944B (en) | Living body detection method, living body detection device, computer equipment and storage medium | |
Belhedi et al. | Adaptive scene‐text binarisation on images captured by smartphones | |
CN111814682A (en) | Face living body detection method and device | |
Izadpanahkakhk et al. | Novel mobile palmprint databases for biometric authentication | |
CN114943976B (en) | Model generation method and device, electronic equipment and storage medium | |
Sayed et al. | Real-time dorsal hand recognition based on smartphone |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200131 |