CN113255516A - Living body detection method and device and electronic equipment - Google Patents


Info

Publication number
CN113255516A
CN113255516A (application CN202110565210.5A)
Authority
CN
China
Prior art keywords
human eye
human
image
face
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110565210.5A
Other languages
Chinese (zh)
Inventor
李亚英
孟春芝
蔡进
王琼瑶
李潇婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Tianjin Co Ltd
Original Assignee
Spreadtrum Communications Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Tianjin Co Ltd filed Critical Spreadtrum Communications Tianjin Co Ltd
Priority to CN202110565210.5A
Publication of CN113255516A
Legal status: Pending

Classifications

    • G06V 40/45 Detection of the body part being alive (under G06V 40/40 Spoof detection, e.g. liveness detection)
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/04 Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/08 Neural networks; Learning methods
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 40/171 Human faces; Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/172 Human faces; Classification, e.g. identification
    • G06V 40/193 Eye characteristics; Preprocessing; Feature extraction
    • G06V 40/197 Eye characteristics; Matching; Classification

Abstract

The application provides a living body detection method and device and electronic equipment, relating to the technical field of computer vision. The living body detection method comprises the following steps: first, an image to be detected of an object to be detected is acquired. Then, the human eye image of the image to be detected is input into a human eye neural network model to obtain a human eye recognition result. If the human eye recognition result has real human eye features, the face image of the image to be detected is input into a face neural network model to obtain a face recognition result. Finally, if the face recognition result is a real face, the object to be detected is determined to be a living object. This living body detection method works on a single-frame image, which further improves detection efficiency while ensuring detection accuracy.

Description

Living body detection method and device and electronic equipment
[ technical field ]
The application relates to the technical field of computer vision, in particular to a method and a device for detecting a living body and electronic equipment.
[ background of the invention ]
The near-infrared living body detection technology is one of the important technologies supporting the face recognition system. The near-infrared living body detection technology can judge whether the object to be detected is a living body according to the near-infrared image of the object to be detected, so that illegal attacks such as photos, videos and masks are resisted, and the reliability of a face recognition system is improved.
One of the near-infrared living body detection methods currently in use is the optical flow method. The optical flow method collects multiple frames of images of the object to be detected and determines whether it is a living body by analyzing how each pixel changes from frame to frame. However, because this method must combine information from multiple frames, it is time-consuming and limits detection efficiency.
[ summary of the invention ]
The embodiment of the application provides a living body detection method, a living body detection device and electronic equipment, which can realize living body detection by using a single-frame image, and can further improve the living body detection efficiency on the basis of ensuring the detection accuracy.
In a first aspect, an embodiment of the present application provides a method for detecting a living body, including: acquiring an image to be detected of an object to be detected; inputting the human eye image of the image to be detected into a human eye neural network model to obtain a human eye recognition result; if the human eye recognition result is the real human eye feature, inputting the human face image of the image to be detected into a human face neural network model to obtain a human face recognition result; and if the face recognition result is a real face, determining that the object to be detected is a living object.
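The first-aspect method is a two-stage cascade: a cheap eye-level check gates a fuller face-level check. A minimal Python sketch of that decision logic follows, with the two models stubbed as plain callables; the label strings and function names are assumptions for illustration, since the patent does not specify an API:

```python
from typing import Any, Callable

# Assumed names for the recognition-result labels described in the text.
# All four "abnormal" labels still count as real human eye features.
REAL_EYE_FEATURES = {
    "normal_real_eye",
    "sharpness_abnormal_eye",
    "brightness_abnormal_eye",
    "integrity_abnormal_eye",
}

def detect_liveness(eye_model: Callable[[Any], str],
                    face_model: Callable[[Any], str],
                    eye_image: Any,
                    face_image: Any) -> bool:
    """Two-stage liveness decision on a single frame's crops."""
    eye_result = eye_model(eye_image)
    if eye_result not in REAL_EYE_FEATURES:
        # Pseudo human eye: non-living, skip the face stage entirely.
        return False
    # Eye stage passed; the larger face region makes the final call.
    return face_model(face_image) == "real_face"
```

Note how an abnormal eye label does not reject the user outright; it merely defers the decision to the face stage, matching the fallback behavior the description explains later.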
In one possible implementation, having real human eye features means having any one or more of the following: normal real human eye features; sharpness-abnormal human eye features; brightness-abnormal human eye features; integrity-abnormal human eye features.
In one possible implementation manner, the method further includes: determining that the object to be detected is a non-living object according to the human eye recognition result and/or the face recognition result.
In one possible implementation manner, determining that the object to be detected is a non-living object according to the human eye recognition result and/or the human face recognition result includes: if the human eye recognition result is a pseudo human eye, determining that the object to be detected is a non-living object; or if the face recognition result is a fake face, determining that the object to be detected is a non-living object.
In one possible implementation manner, the method further includes: acquiring images to be trained of a plurality of living objects and non-living objects; acquiring a human eye training image and a human face training image of each image to be trained; and training original models with the human eye training images and the human face training images respectively to obtain the human eye neural network model and the human face neural network model.
In one possible implementation manner, training original models with the human eye training images and the human face training images to obtain the human eye neural network model and the human face neural network model includes: inputting each human eye training image and its corresponding human eye feature label into a first original model for training to obtain the human eye neural network model; and inputting each face training image and its corresponding face feature label into a second original model for training to obtain the face neural network model. The human eye feature labels include normal real human eyes, sharpness-abnormal human eyes, brightness-abnormal human eyes, integrity-abnormal human eyes, and pseudo human eyes; the face feature labels include real faces and fake faces.
In a second aspect, an embodiment of the present application provides a living body detection apparatus, including: the first acquisition module is used for acquiring an image to be detected of an object to be detected; the first input module is used for inputting the human eye image of the image to be detected into a human eye neural network model to obtain a human eye recognition result; the second input module is used for inputting the face image of the image to be detected into the face neural network model to obtain a face recognition result when the eye recognition result has real eye characteristics; and the determining module is used for determining the object to be detected as a living object when the face recognition result is a real face.
In one possible implementation manner, the apparatus further includes: a second acquisition module, used for acquiring images to be trained of a plurality of living objects and non-living objects; an obtaining module, used for obtaining the human eye training image and the human face training image of each image to be trained; and a training module, used for training original models with the human eye training images and the human face training images respectively to obtain the human eye neural network model and the human face neural network model.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, which when called by the processor are capable of performing the method as described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions that cause the computer to perform the method described above.
In the above technical solution, first, an image to be detected of an object to be detected is acquired. Then, the human eye image of the image to be detected is input into the human eye neural network model to obtain a human eye recognition result. If the human eye recognition result has real human eye features, the face image of the image to be detected is input into the face neural network model to obtain a face recognition result. Finally, if the face recognition result is a real face, the object to be detected is determined to be a living object. This living body detection method works on a single-frame image, which further improves detection efficiency while ensuring detection accuracy.
[ description of the drawings ]
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart of a living body detection method provided by an embodiment of the present application;
FIG. 2 is a flowchart of another living body detection method provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a living body detection apparatus according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another living body detection apparatus according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
[ detailed description ]
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The living body detection method provided by the embodiment of the application can be applied to any living body detection device. By executing this method, whether the object to be detected is a living object can be judged more quickly and accurately. In scenarios involving face recognition, the method can effectively resist illegal attacks such as photos, videos and masks, improving the reliability of face recognition.
Fig. 1 is a flowchart of a method for detecting a living body according to an embodiment of the present disclosure, and as shown in fig. 1, the method for detecting a living body may include:
Step 101: acquire an image to be detected of the object to be detected.
In the embodiment of the application, the near-infrared camera can be utilized to collect the image to be detected of the object to be detected. Furthermore, whether the object to be detected is a living object can be determined according to the acquired image to be detected. It should be noted that, in the embodiment of the present application, the number of the acquired images to be detected may be a single frame.
Step 102: input the human eye image of the image to be detected into a human eye neural network model to obtain a human eye recognition result.
In the embodiment of the application, in order to improve the detection speed, a smaller area in the image to be detected can be pre-detected. For example, the embodiment of the application can perform pre-detection on the human eye region of the image to be detected.
Specifically, first, the image to be detected may be preprocessed to obtain the human eye image in the image to be detected. For example, the preprocessing may include: human eye region identification, feature point positioning, image cropping, and the like.
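The preprocessing step (region identification, feature point positioning, cropping) can be sketched as a landmark-bounded crop. This is a generic illustration, not the patent's implementation: the image is a plain list of pixel rows, and the default margin is an arbitrary assumption:

```python
def crop_region(image, landmarks, margin=2):
    """Crop the axis-aligned bounding box of the landmark points,
    padded by `margin` pixels and clamped to the image borders.

    `image` is a list of pixel rows (height x width);
    `landmarks` is a list of (x, y) points inside the image.
    """
    h, w = len(image), len(image[0])
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin + 1, w)
    y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin + 1, h)
    return [row[x0:x1] for row in image[y0:y1]]
```

The same helper would serve for both the eye crop here and the face crop in step 104, given the respective landmark sets.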
Then, the human eye image can be input into a human eye neural network model which is trained in advance. The human eye neural network model can learn the human eye image and output a corresponding human eye recognition result.
Step 103: judge whether the human eye recognition result has real human eye features. If yes, go to step 104; otherwise, go to step 107.
Depending on the object to be detected, the human eye recognition result may be a real human eye feature or a pseudo human eye.
Having real human eye features may include: normal real human eye features, sharpness-abnormal human eye features, brightness-abnormal human eye features, or integrity-abnormal human eye features. A pseudo human eye may include: eyes in an RGB-format image, eyes seen through an eye-cutout mask, etc. An eye-cutout mask is a face mask with the eye regions cut out, worn over a real face so that the real eyes remain visible.
Specifically, the image of the human eye to be detected may be an image with a normal imaging effect, in a possible case. At this time, if the human eye neural network model identifies that the human eye image has the near-infrared imaging characteristic, the identification result can be output as the characteristic of the normal real human eye. On the contrary, if the human eye neural network model identifies that the human eye image does not have the near infrared imaging characteristic, the identification result can be output as a pseudo human eye.
In another possible case, the human eye image to be detected may have an abnormal imaging effect. For example, the image may be blurry, too bright or too dark, affected by glasses glare, or show closed or incomplete eyes. In such cases, the human eye neural network model may be unable to clearly recognize the near-infrared imaging characteristics of the human eye image. To avoid misrecognizing real human eyes as pseudo human eyes, the model defaults to treating the corresponding human eye image as having real human eye features. Depending on the abnormal scene, the output human eye recognition result may then be: sharpness-abnormal human eye features, brightness-abnormal human eye features, or integrity-abnormal human eye features.
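The fallback policy just described, treating unclear imaging as a real-eye feature so the face stage makes the final call, can be sketched as a rule cascade. All threshold values and label names here are invented for illustration; in the patent this mapping is learned by the trained model rather than hand-coded:

```python
def classify_eye(nir_score: float, sharpness: float,
                 brightness: float, completeness: float) -> str:
    """Map raw eye-image measurements (all in [0, 1]) to one of five labels.

    Abnormal imaging defaults to a 'real eye feature' label rather than
    'pseudo_eye', so real users with blurry, over-exposed, or occluded eyes
    are not rejected at this stage; the face model decides instead.
    """
    if sharpness < 0.3:                      # blurry image
        return "sharpness_abnormal_eye"
    if not 0.2 <= brightness <= 0.8:         # too dark or too bright
        return "brightness_abnormal_eye"
    if completeness < 0.9:                   # closed or partially visible eye
        return "integrity_abnormal_eye"
    # Imaging is normal: trust the near-infrared imaging characteristic.
    return "normal_real_eye" if nir_score >= 0.5 else "pseudo_eye"
```

Note that a low near-infrared score only yields "pseudo_eye" when the imaging itself is normal; under abnormal imaging the score is considered unreliable and is ignored.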
In the embodiment of the application, if the human eye recognition result is the real human eye feature, at this time, because the human eye region area is small and the included features are small, errors may exist in the human eye recognition result output by the human eye neural network model. Especially, in the case of abnormality such as an abnormality in definition of human eyes or an abnormality in brightness, it is difficult to accurately determine whether an object to be detected is a living object by using only the human eye region. Based on this, in order to improve the accuracy of the in-vivo detection, when the human eye recognition result is the real human eye feature, the embodiment of the present application will continue to execute step 104, so as to implement further detection.
For ease of understanding, consider a specific scenario. Suppose the object to be detected is itself a near-infrared image of a living object, i.e., an attack using a photo captured by a near-infrared camera. In this case, the imaging characteristics of the object to be detected are highly similar to those of a living object, and accurate judgment is difficult using the human eye region alone. The human eye neural network model may therefore erroneously recognize the human eyes of the object to be detected as having real human eye features.
If the result of the eye recognition is a pseudo eye, then, in order to increase the detection speed, the subsequent detection process may be omitted, and step 107 is executed to determine that the object to be detected is a non-living object.
Step 104: input the face image of the image to be detected into a face neural network model to obtain a face recognition result.
Specifically, firstly, the image to be detected may be preprocessed to obtain a face image in the image to be detected. Then, the face image can be input into a pre-trained face neural network model. The human face neural network model can learn the human face image and output a corresponding human face recognition result.
In the embodiment of the application, the human face area is larger than the human eye area, and the included features are more, so that the reliability of the living body detection can be improved to a great extent by utilizing the human face neural network model for further detection.
Step 105: determine whether the face recognition result is a real face. If yes, go to step 106; otherwise, go to step 107.
Step 106: determine that the object to be detected is a living object.
In the embodiment of the application, if the face recognition result output by the face neural network model is a real face, the object to be detected can be determined to be a living object.
Step 107: determine that the object to be detected is a non-living object.
In the embodiment of the application, if the human eye recognition result output by the human eye neural network model is a pseudo human eye, or if the human face recognition result output by the human face neural network model is a pseudo human face, the object to be detected can be determined to be a non-living object.
A pseudo face may include: a face in an RGB-format image, a face in a near-infrared image, etc.
In the embodiment of the application, first, an image to be detected of an object to be detected can be acquired. Then, the human eye image of the image to be detected can be input into the human eye neural network model to obtain a human eye recognition result. If the human eye recognition result has real human eye features, the face image of the image to be detected is input into the face neural network model to obtain a face recognition result. Finally, if the face recognition result is a real face, the object to be detected is determined to be a living object. This living body detection method works on a single-frame image, which further improves detection efficiency while ensuring detection accuracy.
Fig. 2 is a flowchart of another living body detection method provided in the embodiment of the present application. As shown in fig. 2, the embodiment of the present application may further include:
Step 201: acquire images to be trained of a plurality of living objects and non-living objects.
In the embodiment of the application, a near-infrared camera can be used to acquire images to be trained of a large number of living objects and non-living objects. The acquired images to be trained may be of various types, for example: normal images of living objects; images with abnormal sharpness, brightness, or integrity; and images of non-living objects such as RGB-format images, near-infrared images, and eye-cutout masks.
Step 202: obtain the human eye training image and the human face training image of each image to be trained.
In the embodiment of the application, each image to be trained can be preprocessed, a human eye region is obtained and used as a human eye training image, and a human face region is obtained and used as a human face training image.
Step 203: train original models with the human eye training images and the human face training images respectively to obtain a human eye neural network model and a human face neural network model.
In the embodiment of the application, firstly, the feature labels of each eye training image and each face training image can be respectively determined according to the imaging characteristics of each eye training image and each face training image.
Then, each human eye training image and the corresponding human eye feature label can be respectively input into the first original model, and the first original model is trained. The first original model can learn the input human eye training image and the corresponding human eye characteristic label to obtain a human eye neural network model. And respectively inputting each face training image and the corresponding face feature label into the second original model, and training the second original model. The second original model can learn the input face training image and the corresponding face feature label to obtain a face neural network model.
In an embodiment of the present application, the human eye feature labels may include: normal real human eyes, sharpness-abnormal human eyes, brightness-abnormal human eyes, integrity-abnormal human eyes, pseudo human eyes, and the like.
The face feature tag may include, for example: real faces, fake faces, and the like.
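The two label sets above can be wired into a small helper that splits one batch of annotated samples into the per-model training sets. The dict keys and the integer label encoding are illustrative assumptions, not part of the patent:

```python
# Assumed ordering; the index of each label serves as its class id.
EYE_LABELS = ["normal_real_eye", "sharpness_abnormal_eye",
              "brightness_abnormal_eye", "integrity_abnormal_eye",
              "pseudo_eye"]
FACE_LABELS = ["real_face", "fake_face"]

def build_training_sets(samples):
    """Split annotated samples into (image, class-id) pairs per model.

    Each sample carries an eye crop and a face crop from one image to be
    trained, with one label per crop.
    """
    eye_set, face_set = [], []
    for s in samples:
        eye_set.append((s["eye_image"], EYE_LABELS.index(s["eye_label"])))
        face_set.append((s["face_image"], FACE_LABELS.index(s["face_label"])))
    return eye_set, face_set
```

The eye set would then feed the first original model and the face set the second, as described in step 203.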
In the embodiment of the application, the original model can be trained by using various types of human eye training images to obtain the human eye neural network model. And training the original model by using the human face training images of various categories to obtain a human face neural network model. The two neural network models can accurately identify various types of images to be detected, the near-infrared living body detection precision can be effectively improved, and the defense capability to illegal attacks such as photos, videos and masks is improved.
Fig. 3 is a schematic structural diagram of a living body detection apparatus according to an embodiment of the present disclosure. The apparatus in this embodiment can implement the living body detection method provided by the embodiments of the present application. As shown in fig. 3, the living body detection apparatus may include: a first acquisition module 31, a first input module 32, a second input module 33, and a determination module 34.
The first acquisition module 31 is configured to acquire an image to be detected of the object to be detected.
The first input module 32 is configured to input a human eye image of the image to be detected into the human eye neural network model, so as to obtain a human eye recognition result.
And the second input module 33 is configured to input the face image of the image to be detected into the face neural network model to obtain a face recognition result when the human eye recognition result has real human eye features.
And the determining module 34 is configured to determine that the object to be detected is a living object when the face recognition result is a real face.
In a specific implementation, having real human eye features means having any one or more of the following: normal real human eye features; sharpness-abnormal human eye features; brightness-abnormal human eye features; integrity-abnormal human eye features.
In a specific implementation manner, the determining module 34 is further configured to determine that the object to be detected is a non-living object according to a human eye recognition result and/or a human face recognition result.
In a specific implementation manner, the determining module 34 is specifically configured to determine that the object to be detected is a non-living object when the human eye recognition result is a pseudo human eye, or when the face recognition result is a fake face.
In this embodiment, first, the first acquisition module 31 may acquire an image to be detected of the object to be detected. Then, the first input module 32 may input the human eye image of the image to be detected into the human eye neural network model to obtain a human eye recognition result. If the human eye recognition result has real human eye features, the second input module 33 may input the face image of the image to be detected into the face neural network model to obtain a face recognition result. Finally, if the face recognition result is a real face, the determination module 34 may determine that the object to be detected is a living object. This living body detection method works on a single-frame image, which further improves detection efficiency while ensuring detection accuracy.
FIG. 4 is a schematic structural diagram of another living body detection apparatus according to an embodiment of the present disclosure. The living body detection apparatus in this embodiment may further include: a second acquisition module 35, an obtaining module 36, and a training module 37.
A second acquisition module 35, configured to acquire images to be trained of a plurality of living objects and non-living objects.
And the obtaining module 36 is configured to obtain the human eye training image and the human face training image of each image to be trained.
And the training module 37 is configured to train the original model with each human eye training image and each human face training image, respectively, to obtain a human eye neural network model and a human face neural network model.
In a specific implementation, the training module 37 is specifically configured to: input each human eye training image and its corresponding human eye feature label into the first original model for training to obtain the human eye neural network model; and input each face training image and its corresponding face feature label into the second original model for training to obtain the face neural network model. The human eye feature labels include normal real human eyes, sharpness-abnormal human eyes, brightness-abnormal human eyes, integrity-abnormal human eyes, and pseudo human eyes. The face feature labels include real faces and fake faces.
In the embodiment of the present application, first, the second acquisition module 35 may acquire images to be trained of a plurality of living objects and non-living objects. The obtaining module 36 may then obtain the human eye training image and the human face training image of each image to be trained. Finally, the training module 37 can train original models with the human eye training images and the human face training images respectively, obtaining a human eye neural network model and a human face neural network model. The two neural network models obtained in the embodiment of the application can accurately identify various types of images to be detected, which effectively improves near-infrared living body detection precision and strengthens the defense against illegal attacks such as photos, videos and masks.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device may include at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the living body detection method provided by the embodiment of the application.
The electronic device may be a living body detection device, and the embodiment does not limit the specific form of the electronic device.
FIG. 5 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 5, the electronic device is in the form of a general purpose computing device. Components of the electronic device may include, but are not limited to: one or more processors 410, a memory 430, and a communication bus 440 that connects the various system components (including the memory 430 and the processors 410).
Communication bus 440 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor bus or local bus using any of a variety of bus architectures. Such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic devices typically include a variety of computer system readable media. Such media may be any available media that is accessible by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 430 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) and/or cache Memory. The electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media. Although not shown in FIG. 5, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to the communication bus 440 by one or more data media interfaces. Memory 430 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility having a set (at least one) of program modules, including but not limited to an operating system, one or more application programs, other program modules, and program data, may be stored in memory 430, each of which examples or some combination may include an implementation of a network environment. The program modules generally perform the functions and/or methodologies of the embodiments described herein.
The electronic device may also communicate with one or more external devices (e.g., keyboard, pointing device, display, etc.), one or more devices that enable a user to interact with the electronic device, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device to communicate with one or more other computing devices. Such communication may occur via communication interface 420. Furthermore, the electronic device may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via a Network adapter (not shown in FIG. 5) that may communicate with other modules of the electronic device via the communication bus 440. It should be appreciated that although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with the electronic device, including but not limited to: microcode, device drivers, Redundant processing units, external disk drive Arrays, disk array (RAID) systems, tape Drives, and data backup storage systems, among others.
The processor 410 executes various functional applications and data processing, such as implementing the living body detection method provided by the embodiment of the present application, by executing the program stored in the memory 430.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and the computer instructions enable the computer to execute the living body detection method provided in the embodiment of the present application.
The computer-readable storage medium described above may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to detection (a stated condition or event)", depending on the context.
It should be noted that the terminal according to the embodiments of the present application may include, but is not limited to, a Personal Computer (Personal Computer; hereinafter, referred to as PC), a Personal Digital Assistant (Personal Digital Assistant; hereinafter, referred to as PDA), a wireless handheld device, a Tablet Computer (Tablet Computer), a mobile phone, an MP3 player, an MP4 player, and the like.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A living body detection method, comprising:
acquiring an image to be detected of an object to be detected;
inputting the human eye image of the image to be detected into a human eye neural network model to obtain a human eye recognition result;
if the human eye recognition result is a real human eye feature, inputting the human face image of the image to be detected into a human face neural network model to obtain a human face recognition result;
and if the face recognition result is a real face, determining that the object to be detected is a living object.
2. The method of claim 1, wherein the real human eye features comprise any one or more of the following:
normal real human eye features;
abnormal-sharpness human eye features;
abnormal-brightness human eye features;
abnormal-integrity human eye features.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and determining the object to be detected as a non-living object according to the human eye recognition result and/or the human face recognition result.
4. The method according to claim 3, wherein determining that the object to be detected is a non-living object according to the human eye recognition result and/or the human face recognition result comprises:
if the human eye recognition result is a fake human eye, determining that the object to be detected is a non-living object; or,
and if the face recognition result is a fake face, determining that the object to be detected is a non-living object.
5. The method of claim 1, further comprising:
acquiring images to be trained of a plurality of non-living objects and living objects;
acquiring a human eye training image and a human face training image of each image to be trained;
and training an original model by using the human eye training images and the human face training images respectively to obtain the human eye neural network model and the human face neural network model.
6. The method of claim 5, wherein training an original model with each of the human eye training image and the human face training image to obtain the human eye neural network model and the human face neural network model comprises:
inputting each human eye training image and the corresponding human eye feature label into a first original model for training to obtain the human eye neural network model;
respectively inputting each face training image and the corresponding face feature label into a second original model for training to obtain the face neural network model;
the human eye feature labels comprise normal real human eyes, abnormal-sharpness human eyes, abnormal-brightness human eyes, abnormal-integrity human eyes and fake human eyes;
the face feature labels comprise real faces and fake faces.
7. A living body detection device, comprising:
the first acquisition module is used for acquiring an image to be detected of an object to be detected;
the first input module is used for inputting the human eye image of the image to be detected into a human eye neural network model to obtain a human eye recognition result;
the second input module is used for inputting the human face image of the image to be detected into a human face neural network model to obtain a human face recognition result when the human eye recognition result is a real human eye feature;
and the determining module is used for determining the object to be detected as a living object when the face recognition result is a real face.
8. The apparatus of claim 7, further comprising:
the second acquisition module is used for acquiring images to be trained of a plurality of non-living objects and living objects;
the acquisition module is used for acquiring the human eye training image and the human face training image of each image to be trained;
and the training module is used for training an original model by using the human eye training images and the human face training images respectively to obtain the human eye neural network model and the human face neural network model.
9. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 6.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 6.
CN202110565210.5A 2021-05-24 2021-05-24 Living body detection method and device and electronic equipment Pending CN113255516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110565210.5A CN113255516A (en) 2021-05-24 2021-05-24 Living body detection method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113255516A true CN113255516A (en) 2021-08-13

Family

ID=77183924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110565210.5A Pending CN113255516A (en) 2021-05-24 2021-05-24 Living body detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113255516A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554005A (en) * 2021-09-18 2021-10-26 北京的卢深视科技有限公司 Security verification method of face recognition system, electronic device and storage medium
CN114333011A (en) * 2021-12-28 2022-04-12 北京的卢深视科技有限公司 Network training method, face recognition method, electronic device and storage medium
CN115798002A (en) * 2022-11-24 2023-03-14 北京的卢铭视科技有限公司 Face detection method, system, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190522A (en) * 2018-08-17 2019-01-11 浙江捷尚视觉科技股份有限公司 A kind of biopsy method based on infrared camera
CN109871811A (en) * 2019-02-22 2019-06-11 中控智慧科技股份有限公司 A kind of living object detection method based on image, apparatus and system
CN111626163A (en) * 2020-05-18 2020-09-04 浙江大华技术股份有限公司 Human face living body detection method and device and computer equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG JUCHENG ET AL.: "A Survey of Face Liveness Detection" (人脸识别活体检测综述), Journal of Tianjin University of Science and Technology (天津科技大学学报) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210813