CN111160235A - Living body detection method and device, and electronic equipment

Info

Publication number: CN111160235A
Application number: CN201911377492.5A
Authority: CN (China)
Prior art keywords: region, image, detected, living body, pixel values
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 杨大业, 宋建华
Current and original assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd; priority to CN201911377492.5A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive


Abstract

The application provides a living body detection method and apparatus, and an electronic device. By changing the light state of an object to be detected, a first image and a second image are acquired in different light states, with an acquisition time interval smaller than a first threshold value. From the two images, a first region and a third region (both comprising an iris region) corresponding to the same body part of the object to be detected are extracted, together with a second region and a fourth region corresponding to another shared body part, and the pixel values of the four extracted regions are used to determine whether the object to be detected is a living body. No high-performance infrared camera needs to be configured, which saves cost, avoids spectral constraints, and widens the application range of living body detection. Compared with methods that extract features directly from a single image (with or without illumination), this also improves the accuracy of living body detection.

Description

Living body detection method and device and electronic equipment
Technical Field
The present disclosure relates to image analysis technologies, and more particularly, to a method and an apparatus for detecting a living body, and an electronic device.
Background
Face recognition is a biometric detection technology that has developed rapidly in recent years. It performs face detection and tracking on an image or video stream acquired by an image acquisition device and recognizes the detected face region, so as to identify the captured subject.
In practical applications, to improve the reliability and security of user identity recognition based on face recognition technology, living body detection is usually performed first to judge whether the detected face is a real face or a forged-face attack, such as a photo of a legitimate user or a pre-recorded video. This ensures that subsequent face image analysis operates on a real face and reduces the technical risk of face recognition; in applications in the field of financial payment in particular, it improves the security of user property.
However, existing living body detection methods usually replace the common RGB camera with a high-performance infrared camera for image acquisition and exploit the difference in how strongly a real face and a non-living carrier absorb and reflect the near-infrared band. Such methods cannot be applied to image analysis after the spectrum is adjusted, and the hardware cost is high.
Disclosure of Invention
In view of the above, the present application provides a living body detection method, the method comprising:
acquiring a first image and a second image of an object to be detected, wherein the acquisition time interval of the first image and the second image is smaller than a first specific value, and the first image and the second image are generated when the object to be detected is in different light states;
extracting a first region and a second region in the first image, and a third region and a fourth region in the second image, wherein the first region and the third region correspond to the same body part of the object to be detected and comprise an iris region, and the second region and the fourth region correspond to the same body part of the object to be detected;
and determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first region and the third region and the pixel values of the corresponding positions of the second region and the fourth region.
In some embodiments, the determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first region and the third region and the pixel values of the corresponding positions of the second region and the fourth region includes:
acquiring respective pixel values of the first region, the second region, the third region and the fourth region;
calculating pixel values of corresponding positions of the first area and the third area to obtain a first feature vector;
calculating pixel values of corresponding positions of the second area and the fourth area to obtain a second feature vector;
merging the first feature vector and the second feature vector to obtain a living body detection feature vector;
and determining whether the object to be detected is a living body by using the living body detection feature vector and an attack sample feature vector.
In some embodiments, determining whether the object to be detected is a living object by using the living object detection feature vector and the attack sample feature vector includes:
and inputting the living body detection feature vector into a living body classification model to obtain a classification result of whether the object to be detected is a living body, wherein the classification model is obtained by performing classification training on the living body detection feature vectors of detection object samples and the attack sample feature vectors of attack samples.
In some embodiments, the determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first region and the third region and the pixel values of the corresponding positions of the second region and the fourth region further includes:
respectively reforming the pixel values of the first region, the second region, the third region and the fourth region with the first resolution to obtain a first region and a third region with the second resolution, and a second region and a fourth region with the third resolution;
wherein the second resolution and the third resolution are both less than the first resolution.
In some embodiments, the acquiring the first image and the second image of the object to be detected includes:
responding to a starting instruction aiming at an instantaneous light source, and acquiring a first image of an object to be detected when the instantaneous light source is started and a second image of the object to be detected after the instantaneous light source is closed;
wherein, the object to be detected is positioned in the light irradiation range of the instantaneous light source.
In some embodiments, the acquiring the first image and the second image of the object to be detected includes:
acquiring a first image of an object to be detected, which is irradiated by a screen of the electronic equipment under first brightness;
responding to a screen brightness adjusting instruction, and controlling the screen of the electronic equipment to be adjusted from first brightness to second brightness;
and acquiring a second image of the object to be detected in the process of adjusting the first brightness to the second brightness of the screen of the electronic equipment.
In some embodiments, the obtaining the feature vector by performing an operation using pixel values at corresponding positions of different regions includes:
the pixel values of corresponding positions of different areas are reformed to obtain a pixel matrix;
vectorizing the pixel matrix, and sorting the obtained one-dimensional vector elements;
and selecting a first number of elements with larger element values according to the sorting result to form the feature vectors corresponding to the different regions.
The present application further provides a living body detection apparatus, the apparatus comprising:
an image acquisition module, configured to acquire a first image and a second image of an object to be detected, wherein the acquisition time interval of the first image and the second image is smaller than a first specific value, and the first image and the second image are generated when the object to be detected is in different light states;
the region extraction module is configured to extract a first region and a second region in the first image, and a third region and a fourth region in the second image, where the first region and the third region correspond to the same body part of the object to be detected and include an iris region, and the second region and the fourth region correspond to the same body part of the object to be detected;
and the living body detection module is used for determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first area and the third area and the pixel values of the corresponding positions of the second area and the fourth area.
The present application also proposes a storage medium having stored thereon a program which, when executed by a processor, implements the living body detection method described above.
The present application further provides an electronic device, comprising: at least one memory and at least one processor, wherein:
the memory for storing a program for implementing the living body detection method as described above;
the processor is used for loading and executing the program stored in the memory so as to realize the steps of the living body detection method.
Therefore, compared with the prior art, the present application provides a living body detection method and apparatus, and an electronic device. While collecting images of an object to be detected, the light state of the object is changed so as to obtain a first image and a second image that are in different light states and whose acquisition time interval is smaller than a first threshold value. As a result, the iris region features of the object to be detected (if it is a living body) change greatly between the two images, while the features of other face regions remain basically unchanged, and both effects are reflected in the pixel values at corresponding positions. The application therefore extracts, from the two images obtained in different light states, a first region and a third region (both including the iris region) corresponding to the same body part of the object to be detected, and a second region and a fourth region corresponding to another shared body part, and uses the pixel values of the four extracted regions to determine whether the object to be detected is a living body. No high-performance infrared camera needs to be configured, which saves cost, avoids spectral constraints, and widens the application range of living body detection; compared with methods that extract features directly from a single image (with or without illumination), it also improves the accuracy of living body detection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a schematic diagram illustrating a scenario of the living body detection method proposed in the present application;
FIG. 2 is a schematic flow chart illustrating an alternative example of the living body detection method proposed in the present application;
FIG. 3 is a schematic flow chart illustrating yet another alternative example of the living body detection method proposed in the present application;
FIG. 4 is a schematic flow chart illustrating yet another alternative example of the living body detection method proposed in the present application;
FIG. 5 is a schematic flow chart illustrating yet another alternative example of the living body detection method proposed in the present application;
FIG. 6 is a schematic view illustrating a scene flow of the living body detection method proposed in the present application;
FIG. 7 is a schematic structural view showing an alternative example of the living body detecting apparatus proposed in the present application;
FIG. 8 is a schematic structural view showing still another alternative example of the living body detecting apparatus proposed by the present application;
FIG. 9 is a schematic structural view showing still another alternative example of the living body detecting apparatus proposed by the present application;
FIG. 10 is a schematic structural view showing still another alternative example of the living body detecting device proposed by the present application;
fig. 11 is a schematic diagram illustrating a hardware structure of an electronic device according to an embodiment of the present application;
fig. 12 is a schematic diagram illustrating a hardware structure of an electronic device according to still another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements. An element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the description of the embodiments herein, "/" means "or" unless otherwise specified, for example, a/B may mean a or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more than two. The terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
Additionally, flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that these operations are not necessarily performed in the exact order presented; the various steps may instead be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or several steps of operations removed from them.
Computer vision is a commonly used artificial intelligence technology in which a camera and a computer replace human eyes to perform machine vision tasks such as identification, tracking, and measurement on a target object, with further graphic processing so that the result becomes an image better suited to human observation or to transmission to an instrument for detection. Face recognition is a biometric recognition branch of computer vision that has been widely applied in many fields, and in face recognition applications, living body detection is an important link in securing authentication, for example in remote verification at banks, x-letter face payment, x-drip remote driver authentication, community access control systems, and the like.
In view of the technical problems described in the Background section, it is desirable to perform reliable living body detection using image information collected by a common camera (e.g., the camera of an electronic device), without replacing it with an expensive infrared camera, depth camera, or the like.
Specifically, referring to the scene schematic diagram shown in fig. 1, which is applicable to the living body detection method provided by the present application, an electronic device may acquire a first image and a second image of an object to be detected in different light states. For example, while the electronic device operates its flash, the images acquired during and after the flash is on may be recorded as the first image and the second image, respectively. For a real user, i.e., a living body, the features (e.g., pixel values) of the iris region change greatly before and after the light state changes, while the image features of other regions change very little; for an attack sample such as a photo, the feature change of every region is very small. The application can therefore determine whether the object to be detected is a living body by analyzing the change in pixel values of regions corresponding to the same body part (at least one pair of regions comprising the iris) across the first image and the second image.
As shown in fig. 1, the living body detection method provided by the present application may be carried out by the electronic device itself. Alternatively, after the electronic device collects the first image and the second image, it may send them for analysis to another electronic device with data processing capability (such as a server or another terminal device), which obtains the living body detection result and feeds it back to the electronic device or to other devices in the environment of the object to be detected, such as the community access control device or bank device in fig. 1. The electronic device implementing the living body detection method may thus be a terminal device, such as a notebook computer, a mobile phone, a desktop computer, or a smart home device, or a server; the product type of the electronic device is not limited in the present application. Fig. 1 is only a schematic illustration of one application scenario of living body detection, which is not limiting and may be adjusted flexibly according to actual requirements.
Referring to fig. 2, a flow chart illustrating an alternative example of the living body detection method proposed by the present application, which may be applied to an electronic device, is shown. As shown in fig. 2, the method may include:
step S11, acquiring a first image and a second image of an object to be detected;
it should be noted that, in this embodiment, the acquisition time interval of the first image and the second image is smaller than a first specific value, and the first image and the second image are generated when the object to be detected is in different light states, and a specific generation manner and an acquisition manner are not limited.
In some embodiments, in combination with the above analysis, the camera of the electronic device may capture a first image of the object to be detected while the front flash lamp is on, and capture a second image after a certain time (e.g., 300 ms). The object to be detected is illuminated while the first image is collected but not while the second image is collected, so the object is in different light states in the first image and the second image.
The different light states may therefore be states of different light intensity. Generating them is not limited to using the front flash lamp of the electronic device; for an electronic device without a front flash lamp, a flash can be simulated by instantaneously adjusting the screen brightness of the electronic device, so that the first image and the second image of the object to be detected in different light states can still be acquired.
Of course, the present application may also adjust the light state of the environment where the object to be detected is located by using an independent lighting device to obtain the first image and the second image, and the detailed implementation process of step S11 is not described in detail in this application.
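For illustration only, a minimal capture sketch follows (in Python with OpenCV, an assumption, since the application prescribes no implementation). The toggle_light callback is hypothetical and stands in for whatever flash, screen, or lighting-device control the platform provides:

    import time
    import cv2

    def capture_image_pair(toggle_light, camera_index=0, interval_s=0.3):
        """Capture a first image while the light source is on and a second
        image after it is turned off; toggle_light is a hypothetical,
        platform-specific callback."""
        cap = cv2.VideoCapture(camera_index)
        try:
            toggle_light(True)              # e.g. front flash on, or screen bright
            ok1, first_image = cap.read()   # object to be detected is illuminated
            toggle_light(False)
            time.sleep(interval_s)          # keep the interval below the first specific value
            ok2, second_image = cap.read()  # object no longer illuminated
        finally:
            cap.release()
        if not (ok1 and ok2):
            raise RuntimeError("camera capture failed")
        return first_image, second_image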
In practical applications, since living body detection is usually applied in scenes requiring user identity authentication, if such a scene adopts face recognition to identify the user, the first image and the second image acquired in step S11 may be face images of the object to be detected.
Step S12, extracting a first region and a second region in the first image, and a third region and a fourth region in the second image;
it should be noted that, in this embodiment, the extracted first region and the extracted third region correspond to the same body part of the object to be detected and include an iris region, and the extracted second region and the extracted fourth region correspond to the same body part of the object to be detected.
In a possible implementation manner, the first region may be an eye region of an object to be detected in the first image, the third region is an eye region of an object to be detected in the second image, the second region is a face region of an object to be detected in the first image, and the fourth region is a face region of an object to be detected in the second image. As described above, the body part regions of the object to be detected represented by the first region, the second region, the third region and the fourth region are not limited to those described in this paragraph, and can be flexibly determined according to actual needs to improve flexibility of the living body detection.
This embodiment does not limit how a region is extracted from an image; it may be implemented with a feature extraction algorithm. That is, after the contents of the first region, the second region, the third region, and the fourth region are determined, feature extraction may be performed on the corresponding first image or second image to obtain the corresponding region images; for example, the eye region in the first image is extracted and recorded as the first region, and the face region as the second region. The specific implementation process is not described in detail, but a sketch follows.
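As a sketch of step S12 under one possible choice of extractor (OpenCV Haar cascades; the application itself does not prescribe any particular algorithm), the eye region and face region can be cropped as follows:

    import cv2

    FACE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    EYE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def extract_regions(image):
        """Return (eye_region, face_region) grayscale crops from one image.
        Applied to the first image this yields the first and second regions;
        applied to the second image, the third and fourth regions."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            return None, None
        x, y, w, h = faces[0]
        face = gray[y:y + h, x:x + w]
        eyes = EYE_CASCADE.detectMultiScale(face)
        if len(eyes) == 0:
            return None, face
        ex, ey, ew, eh = eyes[0]
        return face[ey:ey + eh, ex:ex + ew], face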
Step S13, determining whether the object to be detected is a living body based on the pixel values of the corresponding positions of the first region and the third region, and the pixel values of the corresponding positions of the second region and the fourth region.
In line with the inventive concept analyzed above, the first region and the third region include the iris region of the object to be detected. If the object is a living body, the pixel values of the iris region acquired in different light states differ greatly, while the pixel values of other face regions differ little. This embodiment can therefore determine whether the object to be detected is a living body from the changes between the pixel values of the first and third regions and between the pixel values of the second and fourth regions.
It should be noted that, the present application is not limited to the method for analyzing the pixel value variation of the regions corresponding to the same body part collected in different light conditions, that is, the specific implementation method of step S13 is not limited.
In summary, to implement living body detection in this embodiment, the light state of the object to be detected may be changed while its images are acquired, producing a first image and a second image in different light states with a short acquisition interval. As a result, the iris region features of the object to be detected (if it is a living body) change greatly between the two images, while other face region features remain basically unchanged, and both effects are reflected in the pixel values at corresponding positions.
Therefore, this embodiment extracts, from the two images obtained under different light conditions, a first region and a third region (both including the iris region) corresponding to the same body part of the object to be detected, and a second region and a fourth region corresponding to another shared body part, and determines whether the object to be detected is a living body from the pixel values of the four regions. No expensive, high-performance, highly configured camera such as an infrared camera needs to be fitted, which saves hardware cost, avoids spectral constraints, and widens the application range of living body detection.
In addition, compared with the traditional approach of extracting features directly from a single image (with or without illumination) collected in a fixed light state, realizing living body detection based on analyzing images of the object to be detected under different light states greatly improves detection accuracy.
Referring to fig. 3, a flow chart of a further alternative example of the living body detection method proposed by the present application is shown; it is a refined implementation of the living body detection described in the foregoing embodiment. As shown in fig. 3, the refined implementation proposed by this embodiment may include:
step S21, acquiring a first image and a second image of an object to be detected;
step S22, extracting a first region and a second region in the first image, and a third region and a fourth region in the second image;
for specific implementation processes of step S21 and step S22, reference may be made to the description of corresponding parts in the foregoing embodiments, and details are not repeated in this embodiment.
Step S23, acquiring respective pixel values of the first region, the second region, the third region, and the fourth region;
step S24, calculating pixel values of corresponding positions of the first area and the third area to obtain a first feature vector;
step S25, calculating pixel values of corresponding positions of the second area and the fourth area to obtain a second feature vector;
in practical applications, feature extraction is a concept in computer vision and image processing, and refers to using a computer to extract image information and determine whether a point of each image belongs to an image feature, and the result of feature extraction may be to divide the points on the image into different subsets, where the subsets often belong to isolated points, continuous curves, or continuous regions. In this implementation, the obtained respective pixel points of the plurality of regions may be used as feature points of the corresponding region to form a feature vector of the region.
In the present application, when extracting the feature vectors of the images of different regions, a HOG (Histogram of Oriented Gradients) feature extraction mode may be selected according to actual needs to obtain the features of the region images of the different body parts of the object to be detected, forming the corresponding feature vectors; alternatively, an LBP (Local Binary Pattern) or Haar-like feature extraction mode may be adopted, which is not detailed here.
Based on the above analysis, in a practical application of this embodiment, after the pixel values of the first region, the second region, the third region, and the fourth region are obtained, a two-dimensional pixel matrix may be formed from the pixel values of the two regions of the same body part and then converted into a one-dimensional feature vector; the specific implementation process is not limited.
Certainly, to improve image processing efficiency, the first and second color images obtained in the present application may also be converted into corresponding grayscale images. After the first and second regions in the first grayscale image and the third and fourth regions in the second grayscale image are determined as above, the grayscale values of the pixel points in each region are used to binarize the region, yielding a two-dimensional grayscale matrix that is then converted into a one-dimensional feature vector.
Based on the above analysis, in one possible implementation, the present application may obtain the first feature vector and the second feature vector in the manner shown in fig. 4. Since the two are obtained similarly, they are not described one by one. As shown in fig. 4, the process of obtaining a feature vector by operating on the pixel values at corresponding positions of different regions may include, but is not limited to, the following steps:
step A1, the pixel values of corresponding positions of different areas are reformed to obtain a pixel matrix;
It should be noted that the different regions in step A1 are regions corresponding to the same body part in the first image and the second image, such as the first region and the third region, or the second region and the fourth region. For convenience of description, let the image acquired in the better-lit state be the first image (e.g., the first image of the object to be detected acquired under flash illumination) and the image acquired in the worse-lit state be the second image (e.g., acquired without flash illumination); record a pixel value in the region belonging to the first image as S_f and the pixel value at the corresponding position in the region belonging to the second image as S_b, where a pixel value may be the brightness value of the corresponding pixel point.
The pixel values at corresponding positions in the different regions may then be recombined according to the formula F = (S_f - S_b) / (S_f + S_b) to obtain a pixel matrix, where F represents an element of the pixel matrix. As the formula shows, each element of the pixel matrix is calculated from the pixel values at corresponding positions of the first region and the third region, or of the second region and the fourth region.
Step A2, vectorizing the pixel matrix, and sequencing the obtained one-dimensional vector elements;
step A3, according to the sorting result, selecting the first number of elements with larger element value to form the feature vector corresponding to the different region.
In this embodiment, given the above way of obtaining the pixel matrix, its numbers of rows and columns may be determined by the numbers of rows and columns of pixel points in either of the different regions. Vectorizing the pixel matrix means splicing its row vectors end to end, or its column vectors end to end, to obtain a one-dimensional vector; that is, the high-dimensional pixel matrix is reduced to a one-dimensional vector.
In this embodiment, one-dimensional vector elements that reflect the feature changes of the corresponding regions can then be selected to form the feature vector for those regions.
Specifically, the larger the value of a one-dimensional vector element, the more pronounced the feature it represents, and accordingly the more useful it is for living body detection. This embodiment therefore selects, from the obtained one-dimensional vector elements, the elements whose values reach an element-specific value, or the first number of elements with the largest values, to form the feature vector of the corresponding regions.
Therefore, in the present application, following the feature vector acquisition manner described with fig. 4, the pixel values at the corresponding positions of the first region and the third region may be processed to obtain the first feature vector, and the pixel values at the corresponding positions of the second region and the fourth region to obtain the second feature vector; the specific implementation is not repeated.
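A minimal sketch of steps A1 to A3, assuming grayscale regions resized to a common shape and a small epsilon to avoid division by zero (both assumptions; top_k stands in for the unspecified "first number"):

    import numpy as np
    import cv2

    def region_feature(region_f, region_b, top_k=128):
        """Build the pixel matrix F = (S_f - S_b) / (S_f + S_b), vectorize
        it, sort the elements, and keep the top_k largest ones."""
        s_f = region_f.astype(np.float32)
        s_b = cv2.resize(region_b, (region_f.shape[1], region_f.shape[0]))
        s_b = s_b.astype(np.float32)
        pixel_matrix = (s_f - s_b) / (s_f + s_b + 1e-6)  # epsilon guards /0
        one_dim = pixel_matrix.flatten()   # row vectors spliced end to end
        one_dim.sort()                     # ascending order
        return one_dim[::-1][:top_k]       # the first number of largest elements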
Step S26, merging the first characteristic vector and the second characteristic vector to obtain a living body detection characteristic vector;
and step S27, determining whether the object to be detected is a living body or not by using the living body detection characteristic vector and the attack sample characteristic vector.
Following the above analysis, the first feature vector represents the feature changes of the body part corresponding to the first and third regions of the object to be detected, especially the image feature changes of the iris region as the illumination intensity changes. Similarly, the second feature vector represents the feature changes, as the light state changes, of the body part corresponding to the second and fourth regions, which may include image feature changes of regions of the object other than the iris region. Merging the first and second feature vectors therefore yields a living body detection feature vector that represents the image feature changes captured by both.
The attack sample feature vector may be a feature vector obtained by processing a corresponding attack sample (such as a photo) in the above manner, i.e., the feature vector obtained when the object to be detected in the above steps is replaced by the attack sample; the specific acquisition process is not repeated.
As described above, the feature vectors obtained from a living body and from an attack sample differ greatly, especially in the vector elements of the first feature vector: in general, the elements of the first feature vector of a living body are larger than those of an attack sample, and the elements of its second feature vector are larger as well. This embodiment can therefore determine whether the object to be detected is a living body by comparing the living body detection feature vector with the attack sample feature vector.
In one possible implementation, after the living body detection feature vector is obtained, it may be input into a pre-trained living body classification model to obtain a classification result indicating whether the object to be detected is a living body. The living body classification model may be obtained by classification training on the living body detection feature vectors of detection object samples and the attack sample feature vectors of attack samples; the training process is not described in detail in the present application.
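As an illustrative sketch only (the application does not name a model family), the living body classification model could be a scikit-learn SVM trained on the living body detection feature vectors of detection object samples and attack samples:

    import numpy as np
    from sklearn.svm import SVC

    def train_liveness_classifier(live_vectors, attack_vectors):
        """Binary classification training: label 1 for living body samples,
        label 0 for attack samples (photos, replayed videos, and the like)."""
        X = np.vstack([live_vectors, attack_vectors])
        y = np.concatenate([np.ones(len(live_vectors)),
                            np.zeros(len(attack_vectors))])
        model = SVC(kernel="rbf", probability=True)
        model.fit(X, y)
        return model

    def is_living_body(model, liveness_vector, threshold=0.5):
        """Classification result for one living body detection feature vector."""
        return model.predict_proba([liveness_vector])[0][1] >= threshold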
In summary, this embodiment obtains different images of the object to be detected in different light states, recorded as the first image and the second image. Rather than directly extracting a feature vector from the object, it extracts from the two images pairs of regions corresponding to the same body parts, one pair including the iris region of the object to be detected; obtains the corresponding feature vectors from the pixel value changes at corresponding positions of those regions; and merges them into the living body detection feature vector, so that living body detection is realized reliably and quickly.
Referring to fig. 5, a schematic flow chart of a further optional example of the living body detection method proposed by the present application is shown; it is a further detailed implementation of the method described in the foregoing embodiments. As shown in fig. 5, the detailed implementation proposed by this embodiment may include:
step S31, responding to the opening instruction aiming at the instantaneous light source, and acquiring a first image of the object to be detected when the instantaneous light source is opened and a second image of the object to be detected after the instantaneous light source is closed;
the object to be detected is located in the light irradiation range of the instantaneous light source, and the instantaneous light source and the light irradiation range thereof are not limited by the application.
With reference to the description of the process for acquiring the first image and the second image in the foregoing embodiment, in a further alternative implementation manner, the process for acquiring the first image and the second image of the object to be detected may include:
acquiring a first image of the object to be detected illuminated by the screen of the electronic device at a first brightness; responding to a screen brightness adjustment instruction by controlling the screen of the electronic device to adjust from the first brightness to a second brightness; and acquiring a second image of the object to be detected while the screen of the electronic device adjusts from the first brightness to the second brightness, where the first brightness is greater than the second brightness. In this implementation the screen light of the electronic device serves as the instantaneous light source, but the instantaneous light source is not limited to controlled screen illumination.
As can be seen from the above analysis, the pixel value of each pixel point in the first image is greater than the pixel value of the pixel point at the corresponding position in the second image; the specific pixel values are not limited.
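A sketch of this screen-flash variant, where set_brightness is a hypothetical platform-specific callback (for example, a wrapper over an OS display API) and the concrete brightness levels are assumptions:

    import cv2

    def capture_with_screen_flash(set_brightness, camera_index=0,
                                  first_brightness=1.0, second_brightness=0.1):
        """Acquire the first image at the first (higher) brightness, then
        the second image while the screen adjusts to the second brightness."""
        cap = cv2.VideoCapture(camera_index)
        try:
            set_brightness(first_brightness)   # screen illuminates the subject
            _, first_image = cap.read()
            set_brightness(second_brightness)  # screen brightness adjustment
            _, second_image = cap.read()       # captured during the adjustment
        finally:
            cap.release()
        return first_image, second_image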
Step S32, extracting a first region and a second region in the first image, and a third region and a fourth region in the second image;
with regard to the specific implementation of step S32, reference may be made to the description of the corresponding parts of the above-described embodiments.
As described above, to improve data processing efficiency, as shown in the scene diagram of fig. 6, the first region and the third region may be the eye regions of the object to be detected, and the second region and the fourth region may be its face regions (regions other than the eyes), but this is not limiting; for example, the second region and the fourth region may be the entire face region of the object to be detected, and so on.
Step S33, respectively, of reforming the pixel values of the first region, the second region, the third region, and the fourth region with the first resolution to obtain the first region and the third region with the second resolution, and the second region and the fourth region with the third resolution;
It should be noted that the second resolution and the third resolution are both smaller than the first resolution; this embodiment does not limit their specific values, and the second resolution may be the same as or different from the third resolution. In general, since the first region and the third region include an iris region, the second resolution may be made larger than the third resolution so that iris features remain reliably extractable.
Therefore, in this embodiment, the resolution of each region image may be reduced before processing to improve image processing efficiency; how the resolution is reduced is not detailed in this application. Note that if the second region and the fourth region were the entire face region of the object to be detected, the iris region would occupy only a small proportion of it; after resolution reduction, the iris features might no longer be distinct and would be ignored in subsequent feature extraction, leaving only the large-area face features.
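A sketch of step S33 under the stated constraint (the eye regions keep a higher second resolution than the face regions' third resolution; the concrete sizes are illustrative assumptions):

    import cv2

    EYE_SIZE = (64, 64)    # assumed second resolution, preserves iris detail
    FACE_SIZE = (32, 32)   # assumed third resolution

    def downsample_regions(first_region, third_region, second_region, fourth_region):
        """Reform each region from the first resolution down to the lower
        target resolutions before feature computation."""
        shrink = lambda img, size: cv2.resize(img, size,
                                              interpolation=cv2.INTER_AREA)
        return (shrink(first_region, EYE_SIZE), shrink(third_region, EYE_SIZE),
                shrink(second_region, FACE_SIZE), shrink(fourth_region, FACE_SIZE))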
Step S34, acquiring respective pixel values of the first region, the second region, the third region, and the fourth region;
It should be understood that, in this embodiment, the pixel values of each region, such as the brightness values of the pixel points it contains, are obtained after the resolution reduction is completed.
Step S35, calculating pixel values of corresponding positions of the first area and the third area to obtain a first feature vector;
step S36, calculating pixel values of corresponding positions of the second area and the fourth area to obtain a second feature vector;
step S37, merging the first feature vector and the second feature vector to obtain a living body detection feature vector;
the implementation process of step S34 to step S37 can be according to the description of the corresponding parts of the above embodiments, and will not be described again.
With reference to the scene shown in fig. 6, while the instantaneous light source is on, the iris region of a living body produces light spots, whereas the iris region of an attack sample such as a photograph does not; the first feature vectors of a living body and of an attack sample therefore differ, which in particular enables living body detection.
Similarly, while the instantaneous light source is on, the features at different positions of a living body's face region change noticeably, while those of an attack sample's face region change very little; as shown in fig. 6, this region can likewise be used for living body detection.
Based on the analysis, to improve the accuracy of living body detection, the first feature vector and the second feature vector obtained as above are merged into the living body detection feature vector used for detection; the specific acquisition process of the living body detection feature vector is not detailed.
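Combining the helpers sketched above, the living body detection feature vector is simply the concatenation of the two per-part feature vectors:

    import numpy as np

    def liveness_feature(eye_f, eye_b, face_f, face_b):
        """Merge the first feature vector (iris-bearing eye regions) and the
        second feature vector (face regions) into the detection vector;
        region_feature is the sketch defined earlier."""
        first_vector = region_feature(eye_f, eye_b)
        second_vector = region_feature(face_f, face_b)
        return np.concatenate([first_vector, second_vector])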
Step S38, inputting the living body detection feature vector into a living body classification model to obtain a classification result of whether the object to be detected is a living body;
the living body classification model can be obtained by performing classification training on the living body detection feature vector of the detection object sample and the attack sample feature vector of the attack sample, and the specific training process is not described in detail.
Step S39, according to the classification result, execute the corresponding default operation.
In this embodiment, the content of the preset operation may be determined by the application scenario of living body detection. The preset operations corresponding to a classification result that the object to be detected is a living body may include starting the operating system of the electronic device, completing payment of virtual resources, running a specific application, opening an access control device, and the like.
In summary, with the object to be detected within the light irradiation range of the instantaneous light source, the instantaneous light source is switched on to obtain images of the object under different brightness, i.e., a first image under illumination and a second image without illumination. From these images, different regions corresponding to the same body parts are selected, and their resolution is reduced to improve subsequent image processing efficiency. Corresponding feature vectors are then generated from the pixel value changes of the regions corresponding to each body part, and the feature vectors of the different body parts are merged into the living body detection feature vector, which is input into the living body classification model to obtain, quickly and accurately, a classification result of whether the object to be detected is a living body. The corresponding preset operation can then be executed to meet user requirements, such as improving the use security of the electronic device.
Referring to fig. 7, there is shown a schematic structural view of an alternative example of the living body detecting apparatus proposed by the present application, which may be applied to an electronic device, and as shown in fig. 7, the living body detecting apparatus may include:
the image acquisition module 11 is configured to acquire a first image and a second image of an object to be detected;
the acquisition time interval of the first image and the second image is smaller than a first specific value, and the first image and the second image are generated when the object to be detected is in different light states.
A region extracting module 12, configured to extract a first region and a second region in the first image, and a third region and a fourth region in the second image;
in this embodiment, the first region and the third region correspond to the same body part of the object to be detected and include an iris region, and the second region and the fourth region correspond to the same body part of the object to be detected.
The living body detection module 13 is configured to determine whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first region and the third region and the pixel values of the corresponding positions of the second region and the fourth region.
In some embodiments, as shown in fig. 8, the above-mentioned living body detecting module 13 may include:
a pixel value obtaining unit 131, configured to obtain pixel values of the first region, the second region, the third region, and the fourth region;
a first feature vector obtaining unit 132, configured to perform operation on pixel values at corresponding positions of the first region and the third region to obtain a first feature vector;
a first eigenvector obtaining unit 133, configured to perform operation on pixel values at corresponding positions of the second region and the fourth region to obtain a second eigenvector;
in some embodiments, the process of obtaining the feature vector by performing an operation using pixel values of corresponding positions of different regions, that is, the first feature vector obtaining unit 132 and the first feature vector obtaining unit 133 may each include:
the pixel matrix obtaining unit is used for reforming pixel values of corresponding positions of different areas to obtain a pixel matrix;
the dimension reduction processing unit is used for vectorizing the pixel matrix and sequencing the obtained one-dimensional vector elements;
and the feature vector forming unit is used for selecting a first number of elements with larger element values according to the sorting result to form feature vectors corresponding to the different areas.
A feature vector merging unit 134, configured to merge the first feature vector and the second feature vector to obtain a living body detection feature vector;
a living body detection unit 135, configured to determine whether the object to be detected is a living body by using the living body detection feature vector and the attack sample feature vector.
In a possible implementation manner, the living body detection unit 135 may specifically include:
and the living body classification unit is used for inputting the living body detection feature vector into a living body classification model to obtain a classification result of whether the object to be detected is a living body, the classification model being obtained by performing classification training on the living body detection feature vectors of detection object samples and the attack sample feature vectors of attack samples.
In one possible implementation manner, as shown in fig. 8, the living body detection module 13 may further include:
an image resolution adjusting unit 136, configured to respectively reform respective pixel values of the first region, the second region, the third region, and the fourth region with the first resolution to obtain a first region and a third region with the second resolution, and a second region and a fourth region with the third resolution;
wherein the second resolution and the third resolution are both less than the first resolution.
In some embodiments, as shown in fig. 9, the image acquisition module 11 may include:
a light source turn-on instruction corresponding unit 111, configured to respond to a turn-on instruction for an instantaneous light source, and acquire a first image of an object to be detected when the instantaneous light source is turned on and a second image of the object to be detected after the instantaneous light source is turned off;
wherein, the object to be detected is positioned in the light irradiation range of the instantaneous light source.
In still other embodiments, as shown in fig. 10, the image obtaining module 11 may also include:
a first image acquisition unit 112, configured to acquire a first image of an object to be detected illuminated by an electronic device screen at a first brightness;
the brightness adjusting unit 113 is used for responding to a screen brightness adjusting instruction and controlling the screen of the electronic equipment to be adjusted from first brightness to second brightness;
a second image obtaining unit 114, configured to obtain a second image of the object to be detected during a process of adjusting the screen of the electronic device from the first brightness to the second brightness.
It should be noted that the modules, units, and the like in the above apparatus embodiments may be stored in the memory as program modules, with the processor executing them to implement the corresponding functions. For the functions implemented by the program modules and their combinations and the technical effects achieved, refer to the descriptions of the corresponding parts of the above method embodiments, which this embodiment does not repeat.
The present application further provides a storage medium on which a program may be stored; the program may be called and loaded by a processor to implement the steps of the living body detection method described in the foregoing embodiments. For the specific implementation process, refer to the description of the corresponding parts of the foregoing method embodiments.
Referring to fig. 11, a schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application is shown. The electronic device may be a terminal device or a server; the present application does not limit its product type. As shown in fig. 11, the electronic device may include: at least one memory 21 and at least one processor 22, wherein:
data interaction between the memory 21 and the processor 22 may be realized through a communication bus; the communication process is not described in detail here.
In this embodiment, the memory 21 may be used to store a program implementing the living body detection method described in the foregoing method embodiments, and the processor 22 may be configured to load and execute that program to implement each step of the method. For the specific implementation process, refer to the description of the corresponding parts of the foregoing method embodiments, which are not repeated here.
In some embodiments, the memory 21 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device. The processor 22 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
In one possible implementation, the memory 21 may include a program storage area and a data storage area. The program storage area may store an operating system, application programs required for at least one function (such as an image processing function), a program implementing the living body detection method proposed by the present application, and the like; the data storage area may store data generated during use of the electronic device, such as the acquired first and second images of the object to be detected and the living body detection feature vector.
It should be understood that the electronic device shown in fig. 11 is only an example and does not limit the functions or scope of application of the embodiments; in practice, the electronic device may include more or fewer components than those shown in fig. 11, or combine some of them.
If the electronic device is a terminal device, such as a smart phone, a tablet computer, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a desktop computer or a bank self-service device, then, referring to fig. 12, the electronic device provided in the present application may further include a communication interface, an antenna, a sensor module, a power module, a touch sensing unit for sensing touch events on the touch display panel, at least one input device (such as a keyboard, a mouse, a camera or a sound pickup) and at least one output device (such as a display, a speaker, a vibration mechanism or a lamp). The present application does not limit the specific structure of the terminal device; fig. 12 is only an optional example.
Finally, it should be noted that the embodiments in this specification are described in a progressive or parallel manner, each embodiment focusing on its differences from the others; the same or similar parts among the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A living body detection method, the method comprising:
acquiring a first image and a second image of an object to be detected, wherein the acquisition time interval of the first image and the second image is smaller than a first specific value, and the first image and the second image are generated when the object to be detected is in different light states;
extracting a first region and a second region in the first image, and a third region and a fourth region in the second image, wherein the first region and the third region correspond to the same body part of the object to be detected and comprise an iris region, and the second region and the fourth region correspond to the same body part of the object to be detected;
and determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first region and the third region and the pixel values of the corresponding positions of the second region and the fourth region.
2. The method according to claim 1, wherein the determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first region and the third region and the pixel values of the corresponding positions of the second region and the fourth region comprises:
acquiring respective pixel values of the first region, the second region, the third region and the fourth region;
performing an operation on the pixel values of corresponding positions of the first region and the third region to obtain a first feature vector;
performing an operation on the pixel values of corresponding positions of the second region and the fourth region to obtain a second feature vector;
merging the first feature vector and the second feature vector to obtain a living body detection feature vector;
and determining whether the object to be detected is a living body by using the living body detection feature vector and the attack sample feature vector.
3. The method of claim 2, wherein determining whether the object to be detected is a living body by using the living body detection feature vector and the attack sample feature vector comprises:
inputting the living body detection feature vector into a living body classification model to obtain a classification result indicating whether the object to be detected is a living body, wherein the classification model is obtained by classification training on living body detection feature vectors of detection object samples and attack sample feature vectors of attack samples.
4. The method according to claim 2, wherein determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first region and the third region and the pixel values of the corresponding positions of the second region and the fourth region, further comprises:
reforming the pixel values of the first region, the second region, the third region and the fourth region, each at the first resolution, into a first region and a third region at the second resolution, and a second region and a fourth region at the third resolution;
wherein the second resolution and the third resolution are both less than the first resolution.
5. The method according to any one of claims 1 to 4, wherein the acquiring of the first image and the second image of the object to be detected comprises:
in response to a turn-on instruction for an instantaneous light source, acquiring a first image of the object to be detected while the instantaneous light source is on and a second image of the object to be detected after the instantaneous light source is turned off;
wherein the object to be detected is located within the illumination range of the instantaneous light source.
6. The method according to any one of claims 1 to 4, wherein the acquiring of the first image and the second image of the object to be detected comprises:
acquiring a first image of the object to be detected while it is illuminated by a screen of an electronic device at a first brightness;
in response to a screen brightness adjustment instruction, adjusting the screen of the electronic device from the first brightness to a second brightness;
and acquiring a second image of the object to be detected while the screen of the electronic device is being adjusted from the first brightness to the second brightness.
7. The method according to any one of claims 2 to 4, wherein obtaining a feature vector by performing an operation on the pixel values of corresponding positions in different regions comprises:
reforming the pixel values of corresponding positions of the different regions to obtain a pixel matrix;
vectorizing the pixel matrix and sorting the elements of the resulting one-dimensional vector;
and selecting, according to the sorting result, a first number of elements with the largest values to form the feature vectors corresponding to the different regions.
8. A living body detection apparatus, the apparatus comprising:
an image acquisition module, configured to acquire a first image and a second image of an object to be detected, wherein the acquisition time interval between the first image and the second image is smaller than a first specific value, and the first image and the second image are generated when the object to be detected is in different light states;
the region extraction module is configured to extract a first region and a second region in the first image, and a third region and a fourth region in the second image, where the first region and the third region correspond to the same body part of the object to be detected and include an iris region, and the second region and the fourth region correspond to the same body part of the object to be detected;
and the living body detection module is used for determining whether the object to be detected is a living body according to the pixel values of the corresponding positions of the first area and the third area and the pixel values of the corresponding positions of the second area and the fourth area.
9. A storage medium having a program stored thereon, the program being executed by a processor to implement the living body detection method according to any one of claims 1 to 7.
10. An electronic device, comprising: at least one memory and at least one processor, wherein:
the memory for storing a program for implementing the in-vivo detection method according to any one of claims 1 to 7;
the processor is used for loading and executing the program stored in the memory so as to realize the steps of the living body detection method according to any one of claims 1-7.
CN201911377492.5A 2019-12-27 2019-12-27 Living body detection method and device and electronic equipment Pending CN111160235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911377492.5A CN111160235A (en) 2019-12-27 2019-12-27 Living body detection method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111160235A true CN111160235A (en) 2020-05-15

Family

ID=70558626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911377492.5A Pending CN111160235A (en) 2019-12-27 2019-12-27 Living body detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111160235A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6594377B1 (en) * 1999-01-11 2003-07-15 Lg Electronics Inc. Iris recognition system
CN105320939A (en) * 2015-09-28 2016-02-10 北京天诚盛业科技有限公司 Iris biopsy method and apparatus
CN106778518A (en) * 2016-11-24 2017-05-31 汉王科技股份有限公司 A kind of human face in-vivo detection method and device
CN106845395A (en) * 2017-01-19 2017-06-13 北京飞搜科技有限公司 A kind of method that In vivo detection is carried out based on recognition of face
CN108764121A (en) * 2018-05-24 2018-11-06 释码融和(上海)信息科技有限公司 Method, computing device and readable storage medium storing program for executing for detecting live subject
CN110569808A (en) * 2019-09-11 2019-12-13 腾讯科技(深圳)有限公司 Living body detection method and device and computer equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738161A (en) * 2020-06-23 2020-10-02 支付宝实验室(新加坡)有限公司 Living body detection method and device and electronic equipment
CN111738161B (en) * 2020-06-23 2024-02-27 支付宝实验室(新加坡)有限公司 Living body detection method and device and electronic equipment
CN114973426A (en) * 2021-06-03 2022-08-30 中移互联网有限公司 Living body detection method, device and equipment
CN114973426B (en) * 2021-06-03 2023-08-15 中移互联网有限公司 Living body detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination