CN111259757A - Image-based living body identification method, device and equipment - Google Patents


Info

Publication number
CN111259757A
Authority
CN
China
Prior art keywords
image data
living body
skin color
identified
palm
Prior art date
Legal status
Granted
Application number
CN202010029901.9A
Other languages
Chinese (zh)
Other versions
CN111259757B (en)
Inventor
徐崴
Current Assignee
Alipay Labs Singapore Pte Ltd
Original Assignee
Alipay Labs Singapore Pte Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Labs Singapore Pte Ltd
Priority to CN202010029901.9A
Publication of CN111259757A
Application granted
Publication of CN111259757B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/759 - Region-based matching

Abstract

An embodiment of this specification provides an image-based living body identification method, apparatus, and device. The method includes: acquiring first image data that includes first face image data of a living body to be identified and first palm image data of that living body; acquiring second image data that includes second face image data of the living body to be identified and second palm image data of that living body; determining a first skin color similarity according to the first face image data and the first palm image data; determining a second skin color similarity according to the second face image data and the second palm image data; and determining that the living body to be identified is a dark-skinned living body based on the first skin color similarity and the second skin color similarity.

Description

Image-based living body identification method, device and equipment
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, and a device for image-based living body identification.
Background
With the development of deep learning, face recognition technology has matured and is applied on a large scale in many everyday fields in China, for example in security, payment, authentication, and related fields. Domestic online payment platforms use face recognition to implement business scenarios such as face-scan login, face-scan payment, and face-scan real-name authentication. In these business scenarios, face recognition has become one of the main means of authenticating a user's identity, and before face verification can proceed, the object to be verified must first be recognized as a living body. Otherwise an attacker could compromise information security by presenting a photograph, a recorded video, or a wax figure of a legitimate person.
In the prior art, information security is protected by detecting liveness with a face liveness detection algorithm trained on face data. Training such a model requires both positive samples and attack samples (such as printed photos and screenshots of mobile phone screens). Because dark-skinned faces are underrepresented in the training samples, the model struggles to capture their characteristics, and collecting attack samples from dark-skinned populations is even harder. Training a liveness recognition model for dark-skinned faces depends heavily on non-living face attack samples, which are difficult to obtain; as a result, when such a model is applied to authentication in international business scenarios, dark-skinned users may be misclassified as attack samples and falsely intercepted.
Therefore, there is a need to provide a more reliable living body identification scheme.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide an image-based living body identification method, apparatus, and device, which reduce the false interception rate for dark-skinned people and improve recognition accuracy for them.
In order to solve the above technical problem, the embodiments of the present specification are implemented as follows:
an image-based living body identification method provided by an embodiment of the present specification includes:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
An embodiment of the present specification provides an image-based living body identification apparatus, including:
the first image data acquisition module is used for acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
the second image data acquisition module is used for acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
the first skin color similarity determining module is used for determining first skin color similarity according to the first face image data and the first palm image data;
the second skin color similarity determining module is used for determining second skin color similarity according to the second face image data and the second palm image data;
and the dark skin color living body determining module is used for determining the living body to be identified as a dark skin color living body based on the first skin color similarity and the second skin color similarity.
An embodiment of the present specification provides an image-based living body identification apparatus, including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
Embodiments of the present specification provide a computer readable medium having stored thereon computer readable instructions executable by a processor to implement an image-based living body identification method.
One embodiment of this specification achieves the following beneficial effects: first image data containing face-region and palm-region image data and second image data containing face-region and back-of-hand-region image data are acquired; from them, a first similarity between the face region and the palm region and a second similarity between the face region and the back-of-hand region are determined; and the living body to be identified is determined to be a dark-skinned living body based on these two skin color similarities. No large amount of dark-skinned face data is needed to train a face liveness detection model: dark-skinned people are identified solely from the characteristic that their facial skin color is similar to the skin color of the back of the hand but differs greatly from the skin color of the palm. This reduces the false interception rate for dark-skinned people, improves recognition accuracy, saves the time needed to collect positive and negative training samples, and improves the efficiency of dark-skinned liveness identification.
Drawings
The accompanying drawings, which are included to provide a further understanding of one or more embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the embodiments of the disclosure and not to limit the embodiments of the disclosure. In the drawings:
Fig. 1 is a schematic flowchart of an image-based living body identification method according to an embodiment of the present specification;
Fig. 2 is a schematic view of a living body verification interface in an image-based living body identification method according to an embodiment of the present specification;
Fig. 3 is a schematic view of an interface for acquiring a face image and a palm image in an image-based living body identification method according to an embodiment of the present specification;
Fig. 4 is a schematic view of an interface for acquiring a face image and a back-of-hand image in an image-based living body identification method according to an embodiment of the present specification;
Fig. 5 is a schematic structural diagram of an image-based living body identification apparatus corresponding to Fig. 1 according to an embodiment of the present specification;
Fig. 6 is a schematic structural diagram of an image-based living body identification device corresponding to Fig. 1 according to an embodiment of the present specification.
Detailed Description
To make the objects, technical solutions and advantages of one or more embodiments of the present disclosure more apparent, the technical solutions of one or more embodiments of the present disclosure will be described in detail and completely with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present specification, and not all embodiments. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without making any creative effort fall within the scope of protection of one or more embodiments of the present specification.
In the field of identity authentication and identification, attackers may impersonate a legitimate user by presenting printed photos, high-definition prints, photographs or videos replayed on a mobile phone screen, masks, and so on, thereby compromising the legitimate user's information security. If detection relies on a face liveness detection algorithm trained on face data, dark-skinned people may fail to be recognized because their related data is difficult to collect, leading to false interception. This scheme abandons the model-training approach. Instead, it uses as the visual feature of a liveness detection algorithm the characteristics that, for dark-skinned people, there is a large visual color difference between the palm and the back of the hand, while the skin color of the back of the hand is close to that of the face. On this basis it provides a detection method based on palm flipping, face-hand color difference detection, and contour matching, solving the difficulty of performing liveness detection on dark-skinned faces.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating an image-based living body identification method according to an embodiment of the present disclosure. From the viewpoint of a program, the execution subject of the flow may be a program installed in an application server or an application client.
As shown in fig. 1, the process may include the following steps:
step 102: acquiring first image data; the first image data comprises first face image data of a living body to be recognized and first palm image data of the living body to be recognized, and the first palm image data comprises image data of a palm center area of the living body to be recognized.
Image data may be the set of values of each pixel, expressed numerically. Because this scheme relies on the characteristic that, for dark-skinned people, the facial skin color is similar to the skin color of the back of the hand but differs greatly from the skin color of the palm, the acquired image data must cover the face, the palm, and the back of the hand. The first image data mentioned here includes first face image data and first palm image data, where the first palm image data includes image data of the palm region of the living body to be identified. The first image data may also contain other data, such as environmental data beyond the face and hand. The terms "first image data", "first face image data", and "first palm image data" serve only to distinguish these from other image data; "first" carries no other special meaning.
Step 104: and acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified.
Compared with the first image data in step 102, the second image data includes image data of the back-of-hand region rather than the palm region. The second face image data in the second image data and the face image data in the first image data should belong to the same living body; if they do not, an attack characteristic can be considered present. "First" and "second" are used only for convenience of explanation. In practice the face data need not be captured twice: if a video is captured, it suffices to track the hand-flipping motion and capture image data of the palm region and the back-of-hand region before and after the flip, thereby obtaining the face, palm, and back-of-hand image data.
Step 106: and determining a first skin color similarity according to the first face image data and the first palm image data.
Similarity can be determined by, for example, a fuzzy comprehensive evaluation approach that uses several relative indexes as a uniform scale to obtain the similarity between measured indexes and standard values. This step determines the skin color similarity between the face region and the palm region; it may be computed algorithmically or determined by other means. One way to determine skin color similarity is: acquire image parameters → map the image from RGB to YCbCr (a color space in which Y is the luma component, Cb the blue-difference chroma component, and Cr the red-difference chroma component) → build a skin color model → compute a similarity matrix using the skin color model → apply median filtering → normalize the similarity. Various distance measures may be used for the similarity computation, such as Euclidean distance or cosine distance; this embodiment does not specifically limit the choice, as long as the method is suitable for computing skin color similarity.
When determining skin color similarity, the average chroma of the face region, the palm region, and the back-of-hand region may be determined first, and then the similarity of the average chroma between regions computed.
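As an illustration only (not part of the patented scheme), the RGB-to-YCbCr mapping and average-chroma comparison described above can be sketched in a few lines of Python. The BT.601 chroma coefficients are standard; the `scale` constant and function names are hypothetical choices for this sketch:

```python
import numpy as np

def rgb_to_cbcr(pixels):
    """Map RGB pixel rows (N x 3, values 0-255) onto the Cb/Cr chroma plane
    using the standard BT.601 coefficients."""
    r, g, b = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=1)

def skin_tone_similarity(region_a, region_b, scale=40.0):
    """Similarity in (0, 1] between the average chroma of two skin regions;
    identical average tones give 1.0. `scale` is an illustrative constant,
    not a value taken from the scheme."""
    mean_a = rgb_to_cbcr(np.asarray(region_a, dtype=float)).mean(axis=0)
    mean_b = rgb_to_cbcr(np.asarray(region_b, dtype=float)).mean(axis=0)
    return float(np.exp(-np.linalg.norm(mean_a - mean_b) / scale))
```

A real implementation would first segment the face, palm, and back-of-hand regions and could add the skin color model, median filtering, and normalization steps listed above.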
Step 108: and determining a second skin color similarity according to the second face image data and the second palm image data.
This step is to determine the similarity between the skin color of the face and the skin color of the back of the hand, and the determination method may refer to the method in step 106.
Step 110: determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
Whether the living body to be identified is a dark-skinned living body can be determined from the similarity between the facial skin color and the palm skin color and the similarity between the facial skin color and the back-of-hand skin color. The determination is based on the characteristic that for a dark-skinned living body the facial skin color is similar to the back-of-hand skin color but differs greatly from the palm skin color. The similarity thresholds for face versus back-of-hand and face versus palm can be set according to actual conditions; this scheme does not specifically limit them.
In the method of fig. 1, first image data containing face-region and palm-region image data and second image data containing face-region and back-of-hand-region image data are acquired; from them, a first similarity between the face region and the palm region and a second similarity between the face region and the back-of-hand region are determined; and the living body to be identified is determined to be a dark-skinned living body based on these two skin color similarities. There is no need to collect a large amount of dark-skinned face data to train a face liveness detection model: dark-skinned people are identified solely from the characteristic that their facial skin color is similar to that of the back of the hand and differs greatly from that of the palm, reducing the false interception rate and thus improving recognition accuracy for dark-skinned people.
Based on the process of fig. 1, some specific embodiments of the process are also provided in the examples of this specification, which are described below.
In the process of identifying a dark-skinned living body, not only the "dark skin color" but also the "living body" must be recognized. The dark skin color can be identified using the method of fig. 2, and before it is identified, whether the object to be identified is a living body can be checked. Specifically, before acquiring the first image data, the method may further include:
acquiring continuous multi-frame image data;
and judging whether the living body to be identified exists in the image corresponding to the multi-frame image data or not according to the multi-frame image data.
First, in the process of identifying a dark-skinned living body, a technical means is needed to judge whether the user currently operating the user interface (UI) is a normal living natural person or a non-living attack impersonating the current user's identity (such as a photo, a high-definition print, a mobile phone screen, or a mask attack). The order of the two checks is flexible: one may first judge whether a living body to be identified exists in the images corresponding to the multiple frames and then identify whether the object is dark-skinned, or first identify the skin color and then check liveness, chosen according to actual conditions.
The continuous multi-frame image data may be a video or an animation; the purpose is to detect whether a living body to be identified exists in the corresponding images. This may also be called a random-action-based liveness identification method: the randomness effectively raises the attack cost and prevents an attacker from preparing in advance. For example, a prompt instructing the object to be recognized to perform a specified action may be displayed on the capture interface. Liveness detection can be applied in identity verification scenarios to confirm an object's real physiological characteristics. Commonly used liveness detection methods fall roughly into four categories. The first detects inherent characteristics of the human face, including blink detection, spectral analysis, and the like. The second detects spoofing attacks by using a light source or sensing device, such as a thermal image sensor, to detect the difference in reflection between a live face and a fake under infrared light. The third extracts feature information from video and audio, for example whether mouth movements and sound are synchronized when a person speaks. The last requires the user to perform a specified action and performs liveness detection by verifying whether the observed actions match the request. Of course, in face recognition applications, liveness may also be determined through blinking, mouth opening, head shaking, nodding, and/or looking up at the camera, using techniques such as facial keypoint localization and face tracking.
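As a toy illustration of the action-based category (not from the patent; the action names and both functions are hypothetical), a random challenge and its verification might look like:

```python
import random

# Candidate challenge actions; the random choice raises the attack cost because
# the required sequence cannot be recorded in advance.
ACTIONS = ["blink", "open_mouth", "shake_head", "nod", "look_up"]

def issue_challenge(n=2, rng=random):
    """Randomly pick n distinct actions the user must perform, in order."""
    return rng.sample(ACTIONS, n)

def verify_challenge(requested, observed):
    """Liveness passes only if the observed actions match the request, in order."""
    return list(observed) == list(requested)
```

In practice the `observed` sequence would come from facial keypoint tracking rather than being supplied directly.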
Of course, besides the methods mentioned above, whether the object to be identified is a living body can also be determined by monitoring the user's palm-flipping process, for example by using a real-time hand detector based on an SSD neural network to track the flipping of the user's palm and thereby verify liveness. All of these liveness identification methods fall within the protection scope of this scheme; the embodiments of this specification do not specifically limit the method of identifying a living body.
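Purely as a hedged sketch of the flip-tracking idea (the patent does not specify this logic; the per-frame labels stand in for the output of a real hand detector such as the SSD-based one mentioned above):

```python
# A toy state machine for the palm-flip check. A real system would run a
# hand detector on every video frame; here each frame is reduced to a
# hypothetical label: "palm", "back", or "none".

def flip_detected(frame_labels):
    """True if the sequence shows the palm and then the back of the hand,
    with the hand never leaving the frame mid-flip."""
    seen_palm = False
    for label in frame_labels:
        if label == "none":
            if seen_palm:
                return False  # hand disappeared during the flip: reject
            continue
        if label == "palm":
            seen_palm = True
        elif label == "back" and seen_palm:
            return True  # palm-to-back transition observed
    return False
```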
By these methods, common attack means such as photos, face swapping, masks, occlusion, and screen replay can be effectively resisted, helping users guard against fraud and protecting their interests.
In the method steps of fig. 1, after determining the first skin color similarity between the face region and the palm center region and the second skin color similarity between the face region and the back of the hand region, it may be determined whether the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity, and specifically, the method may include:
judging whether the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value to obtain a first judgment result;
and when the first judgment result shows that the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value, determining that the living body to be identified is a dark skin color living body, wherein the second threshold value is larger than the first threshold value.
Because for dark-skinned people the facial skin color is similar to the back-of-hand skin color and differs greatly from the palm skin color, the similarity between the face region and the back of the hand will be large while the similarity between the face region and the palm will be small; therefore, the first threshold should be set well below the second threshold. For example: suppose the first skin color similarity between the face region and the palm region, determined from the first face image data and the first palm image data, is 0.1, and the second skin color similarity between the face region and the back-of-hand region, determined from the second face image data and the second palm image data, is 0.9. With a first threshold of 0.3 and a second threshold of 0.7 (the second larger than the first), the first similarity 0.1 is below 0.3 and the second similarity 0.9 is above 0.7, so the living body to be identified can be determined to be a dark-skinned living body.
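The worked example above can be captured in a small decision function (an illustrative sketch; the 0.3 and 0.7 defaults are the example's values, not values prescribed by the scheme):

```python
def is_dark_skinned_living_body(sim_face_palm, sim_face_back, t1=0.3, t2=0.7):
    """Decision rule from the text: the face/palm similarity must fall below
    the first threshold while the face/back-of-hand similarity reaches the
    second, larger threshold."""
    assert t2 > t1, "the second threshold must exceed the first"
    return sim_face_palm < t1 and sim_face_back >= t2
```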
It should also be noted that after the first and second skin color similarities are determined, they must be compared with the set thresholds. The first and second thresholds may be determined by statistics on collected samples or by machine learning with positive and negative samples; this is not specifically limited in the embodiments of this specification.
By this method, liveness recognition of dark-skinned faces can be achieved, solving the problem that conventional liveness algorithms falsely intercept dark-skinned faces at too high a rate because of the particularities of their appearance.
In the process of determining a dark-skinned living body, to prevent different users from cooperating to complete authentication (for example, user A's face is placed in the face capture area while the captured hand information belongs to user B), whether the acquired image data belongs to the same person can be determined by comparing the face contour similarity across the different image data, together with the contour similarity between the palm and the back of the hand. The specific method is as follows:
before determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity, the method may further include:
determining a first contour similarity of the face region in the first image data and the second image data according to the first face image data and the second face image data;
determining a second contour similarity of the palm center area and the back area according to the first palm image data and the second palm image data;
judging whether the first contour similarity is greater than or equal to a third threshold and the second contour similarity is greater than or equal to a fourth threshold, to obtain a second judgment result;
and when the second judgment result shows that the first contour similarity is greater than or equal to the third threshold and the second contour similarity is greater than or equal to the fourth threshold, determining that the living bodies to be identified corresponding to the first image data and the second image data are the same object.
In an actual application scenario, if the recognized living bodies are the same user, the face contour in the first image data and that in the second image data should be the same, and the palm contour in the first image data should be symmetrical to the back-of-hand contour in the second image data. The third threshold and the fourth threshold may be set according to actual conditions: they may be determined by performing statistics on actually collected samples, or by machine learning with positive and negative samples, which is not specifically limited in the embodiments of the present specification.
The method for determining contour similarity may refer to the above-mentioned method for determining skin color similarity, and will not be described herein again.
It should be noted that the definition of contour similarity matters when comparing contours. If similarity is defined so that only identical contours score highest, then the palm image and the back-of-hand image, being mirror images of each other, will score relatively low; if it is defined so that broadly similar (including mirror-symmetric) contours score highly, the back-of-hand and palm images will score relatively high. In this scheme, contours that are basically similar are considered to have high similarity.
By this method, it can be determined that the face images in the acquired image data belong to the same living body, better ensuring the accuracy of living body recognition.
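A minimal sketch of the same-person check described above, under the "mirror-symmetric contours count as similar" convention: binary region masks are compared by intersection-over-union, and the back-of-hand mask is flipped horizontally before being compared with the palm mask, since turning the same hand over mirrors its silhouette. The IoU measure and the threshold values are illustrative assumptions, not prescribed by this document.

```python
import numpy as np

def contour_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two binary region masks, used here as a
    crude stand-in for contour similarity (1.0 = identical outlines)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def same_person_check(face_mask_1: np.ndarray, face_mask_2: np.ndarray,
                      palm_mask: np.ndarray, back_mask: np.ndarray,
                      third_threshold: float = 0.8,
                      fourth_threshold: float = 0.6) -> bool:
    """The two face outlines should match directly; the palm outline should
    match the horizontally mirrored back-of-hand outline."""
    face_sim = contour_similarity(face_mask_1, face_mask_2)
    hand_sim = contour_similarity(palm_mask, np.fliplr(back_mask))
    return face_sim >= third_threshold and hand_sim >= fourth_threshold
```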
Before judging the contour similarity, the face region, the palm region and the back-of-hand region need to be segmented from the acquired images. This can be implemented as follows:
before determining the first skin color similarity according to the first face image data and the first palm image data, the method may further include:
segmenting the first image data and the second image data by adopting a face segmentation algorithm to respectively obtain face regions corresponding to the first image data and the second image data;
and segmenting the first image data and the second image data by adopting a hand segmentation algorithm to respectively obtain a palm center region in the first image data and a back region in the second image data.
Face segmentation is a research topic in intelligent information processing and computer vision, and many algorithms are available. For example: face segmentation can exploit the obvious color difference between the face and its surroundings, using the color characteristics of the face; motion characteristics can be used to locate and segment the face, e.g., by comparing the differences between adjacent frames of a motion image sequence to judge the motion of a person in the scene, outline the approximate contour of the person, and then locate the face image; a gray-level face segmentation algorithm can exploit the symmetry of the face through steps such as edge extraction, edge thinning, symmetry analysis and face segmentation; or Face Parsing can be used to segment each part of the face.
The hand segmentation algorithm solves the hand segmentation problem, which can be regarded as labeling hand pixels versus non-hand pixels in, for example, the RGB and depth images obtained by a Kinect sensor. Hand segmentation methods mainly include skin-color-based, motion-based, and contour-based algorithms.
In the above method, the face segmentation algorithm is used to segment the face region, and the hand segmentation algorithm is used to segment the palm region and the back-of-hand region; the specific face segmentation algorithm and hand segmentation algorithm used are not specifically limited in the embodiments of the present specification.
By the method, the image data corresponding to the face region, the image data corresponding to the palm region and the image data corresponding to the back region are segmented from the first image data and the second image data, so that the first similarity and the second similarity can be conveniently determined subsequently, and the dark skin living body identification can be performed more quickly and accurately.
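As one concrete instance of the skin-color-based segmentation family mentioned above, a simple chroma-range skin mask can be sketched as follows. The Cb/Cr ranges are common rule-of-thumb values and are assumptions of this sketch, not values prescribed by this document; a production system would use one of the more robust algorithms listed above.

```python
import numpy as np

def skin_mask_ycbcr(image_bgr: np.ndarray,
                    cb_range=(77, 127), cr_range=(133, 173)) -> np.ndarray:
    """Return a boolean mask of pixels whose Cb/Cr chroma falls in a
    typical skin range (BT.601 conversion from BGR)."""
    b, g, r = [image_bgr[..., i].astype(np.float64) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

From such a mask, the largest connected component inside the face shooting frame would be taken as the face region, and the component inside the hand shooting frame as the palm or back-of-hand region.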
In a specific application scenario, in order to authenticate and identify an object to be identified, prompt information is displayed in the interface of the terminal device that acquires the images, prompting the object to perform corresponding actions. Before acquiring the continuous multi-frame image data, prompt information may be displayed, which may specifically include:
displaying first prompt information for prompting an object to be recognized to execute a first specified action, wherein the first specified action is used for detecting whether the object to be recognized is a living body.
The displaying of the first prompt information for prompting the object to be recognized to execute the first specified action may specifically include:
displaying a first shooting frame;
and/or a graphical illustration representing the first specified action;
and/or an animation for representing the first specified action.
In an actual application scenario, before the continuous multi-frame image data is acquired, prompt information and a shooting frame are displayed on the interface of the terminal that acquires images. The first prompt information may prompt the object to be recognized to perform the first specified action, i.e., an action from which it can be recognized whether the object is a living body, such as: instructing the object to face the camera and blink, raise a palm and turn it toward the camera, open the mouth, shake the head, nod, and so on. This may be described with reference to the following figure:
fig. 2 is a schematic view of a living body verification interface in an image-based living body identification method according to an embodiment of the present disclosure.
As shown in fig. 2, when living body identification is required, the image acquisition terminal opens a shooting interface and displays the first prompt information, which may include a first shooting frame 201; and/or an image and text description 202 representing the first specified action; and/or an animation 203 representing the first specified action. For example, the terminal interface may display the first shooting frame 201 together with a text description 202 such as: "Keep your head inside the shooting frame, raise your left hand to eye level with the palm facing the camera, and blink." It should be noted that, in an actual application scenario, the content of the prompt may be set according to the actual situation, as long as the user can be prompted to make the specified action; all examples listed in the embodiments of this specification are only for explaining the scheme and do not limit it.
The displayed first shooting frame 201 may take various shapes set according to actual requirements, as long as an image of the object performing the specified action can be acquired. For example, assume the first specified action is: the user blinks, raises the left hand and makes a fist. The first shooting frame may then be a circular, square or otherwise shaped region, as long as the face and the left hand can be placed within it. Of course, in general, so that the face image can be clearly captured, the raised hand should not occlude the face when the user is prompted to make the specified action. The first shooting frame may include a head shooting frame and a gesture shooting frame, set according to actual conditions, which is not specifically limited in the embodiments of the present specification. When the object to be identified passes living body identification, a "living body verification passed" message or other prompt guiding the user to the next stage of verification may be displayed.
After the object to be recognized is verified as a living body, its skin color needs to be further determined. First, a face image and a palm image of the living body to be recognized are collected; the information displayed on the terminal interface at this point can be described with reference to the following figure:
fig. 3 is a schematic view of an interface for acquiring a face image and a palm image in an image-based living body identification method according to an embodiment of the present disclosure.
Before the acquiring the first image data, the method may further include:
and displaying second prompt information for prompting the living body to be recognized to execute a second specified action, wherein the second prompt information is used for prompting the living body to be recognized to enable the face to face the camera and place the palm of the hand to face the camera.
The displaying of the second prompt information for prompting the living body to be recognized to execute the second specified action may specifically include:
displaying a second photographing frame for photographing a portrait;
and/or a third shooting frame for shooting the palm center;
and/or an image description, a text description for representing the second designated action;
and/or an animation for representing the second specified action.
As shown in fig. 3, after the object to be recognized is determined to be a living body, second prompt information is displayed on the terminal interface. The second prompt information may be used to prompt the living body to be recognized to face the camera and place the palm of one hand toward the camera, and may specifically include a second shooting frame 301 for shooting the face; and/or a third shooting frame 302 for shooting the palm; and/or an image and text description 303 representing the second specified action; and/or an animation 304 representing the second specified action. The shapes of the second shooting frame 301 and the third shooting frame 302 may be set according to actual conditions, with reference to the shooting frame in the first prompt information. Of course, the second shooting frame 301 and the third shooting frame 302 merely illustrate that a face image and a palm image need to be collected here; in actual applications there may be several shooting frames or only one, as long as the face image and the palm image can be collected. The text description in the prompt may also be adapted to the actual situation, for example: "Please place the head and the palm in the corresponding shooting frames", or, if there is only one shooting frame: "Please place the head and the palm inside the shooting frame." Specifically, in order to acquire the palm image data, the living body to be recognized can be prompted to raise the hand beside the face with the five fingers open and without occluding the face, so that an image in which the face and the palm coexist is captured.
After the face image data and the palm image are collected, an image in which the face and the back of the hand coexist needs to be collected. The display interface at this point can be explained with reference to the following figure:
fig. 4 is a schematic view of an interface for acquiring a face image and a back image in an image-based living body recognition method according to an embodiment of the present disclosure.
Before the acquiring the second image data, the method may further include:
and displaying third prompt information for prompting the living body to be identified to execute a third specified action, wherein the third prompt information is used for prompting the living body to be identified to keep the first specified action unchanged and to place the back of the other hand toward the camera. Specifically, displaying the third prompt information for prompting the living body to be recognized to execute the third specified action may include:
displaying a fourth photographing frame for photographing a portrait;
and/or a fifth photographing frame for photographing the back of the hand;
and/or an image description, a text description for representing the third designated action;
and/or an animation for representing the third specified action.
As shown in fig. 4, before acquiring the face image data and the back-of-hand image data, third prompt information may be displayed on the terminal interface. The third prompt information may be used to prompt the living body to be recognized to face the camera and place the back of the hand toward the camera, and may specifically include a fourth shooting frame 401 for shooting the face; and/or a fifth shooting frame 402 for shooting the back of the hand; and/or an image and text description 403 representing the third specified action; and/or an animation 404 representing the third specified action. The shapes of the fourth shooting frame 401 and the fifth shooting frame 402 may be set according to actual conditions, with reference to the shooting frame in the first prompt information. Of course, the fourth shooting frame 401 and the fifth shooting frame 402 merely illustrate that a face image and a back-of-hand image need to be collected here; in actual applications there may be several shooting frames or only one, as long as the face image and the back-of-hand image can be collected. The text description in the prompt may likewise be adapted, for example: "Please place the head and the back of the hand in the corresponding shooting frames", or, if there is only one shooting frame: "Please place the head and the back of the hand inside the shooting frame."
Specifically, in order to collect image data of the back of the hand, the living body to be recognized can be prompted to lift the hand beside the face, the five fingers are opened, the back of the hand faces the camera, and the face is not shielded, so that images of the face and the back of the hand which coexist are guaranteed to be shot.
By this method, corresponding prompt information is displayed on the terminal interface to prompt the object to be recognized to make the corresponding action, which facilitates recognition and improves the efficiency of identifying a dark-skinned living body.
In a specific application scenario, after the dark-skinned living body is identified, the identity of the user may be further verified; for example, the collected face image information can be compared with face image information stored in advance in the system to determine whether the identity of the living body to be identified is correct, and if so, the authentication passes. Correspondingly, prompt information instructing the user to continue the identity authentication operation can be displayed on the terminal interface, and after authentication passes, prompt information indicating that authentication has passed is displayed.
It should be noted that, in a specific application scenario, the method in the foregoing embodiments may be implemented by taking photographs, recording video, or capturing moving pictures, and various manners may be adopted when determining whether the object to be identified is a living body. When the object is instructed to make a specified action, a time limit for image acquisition can be set; if the specified action is not captured within that time, the corresponding prompt information can be displayed again to remind the object to act as prompted. When collecting palm image data, the living body can be prompted to face the palm toward the camera with the five fingers open; of course, other specified actions are possible as long as the palm image data can be accurately acquired. When acquiring the face, palm and back-of-hand image data, the placement of the hand may be set according to actual conditions: it may be beside the face without occluding it, or above the head, which is not specifically limited in the embodiments of the present specification. To better explain the technical solution of the above embodiments, the following specific steps may be considered:
assuming that the identity of the user needs to be authenticated in the security scene, the following steps may be adopted to determine whether the user to be identified is a dark skin living body:
1) In the interactive UI, prompt the object to be recognized to raise one hand to eye level, face the camera of the device, and blink. A corresponding recognition algorithm is used to recognize whether the user has performed the blinking action; if so, proceed to step 2) below. Otherwise, judge it to be a non-living-body attack.
It should be noted that this step may be a random action specified by the system, and the specified action in the authentication process may be an action that cannot be known by the user in advance. Such as: the designated action may include one or more of blinking, flipping the palm, opening the mouth, shaking the head, and the like.
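One possible way to implement the blink check in step 1) is the eye-aspect-ratio (EAR) heuristic over facial landmarks. The six-landmark eye ordering (as in dlib's 68-point model) and the 0.2 closed-eye threshold are assumptions of this sketch, not part of the original scheme, which does not prescribe a particular blink detector.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR over six eye landmarks (ordered p1..p6 around the eye, as in
    dlib's 68-point model): drops toward 0 when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def detect_blink(ear_sequence, closed_threshold: float = 0.2) -> bool:
    """A blink is a frame run whose EAR dips below the closed-eye
    threshold and then recovers (eye open again at the end)."""
    ears = list(ear_sequence)
    dipped = any(e < closed_threshold for e in ears)
    return dipped and bool(ears) and ears[-1] >= closed_threshold
```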
2) Prompting the object to be recognized to lift the other hand to the side of the face (the hand does not cover the face, and the five fingers are opened), and taking a picture of the face and the palm center.
3) Prompt the object to be recognized to turn over the hand from step 2) (after turning, the hand still does not cover the face, and the five fingers remain open), and take a picture of the face and the back of the hand.
4) Using the pictures taken in step 2) and step 3), determining whether the living body is a dark skin living body based on the following rules:
A) use a hand segmentation algorithm to extract the hand regions in the palm image and the back-of-hand image; the two contours should be symmetrical;
B) use a face segmentation algorithm to extract the face regions in the palm image and the back-of-hand image; their contours and average chroma should be similar;
C) the average chroma of the back-of-hand region in the back-of-hand image should be similar to the average chroma of the face;
D) the average chroma of the palm region in the palm image must not be too close to the average chroma of the face.
If the above rules are all satisfied, judging that the object to be identified is a dark skin living body.
Of course, it should be noted that steps 1)-4) and rules A)-D) are only used to help explain the specific scheme in the embodiments of this specification and do not limit its scope; in an actual application scenario, the scheme need only exploit the characteristics that "the palm and back-of-hand skin colors of dark-skinned people differ greatly visually, and the back-of-hand skin color is close to the face skin color".
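The four rules A)-D) can be combined into a single decision as sketched below. All threshold values are illustrative placeholders that would, in practice, be determined from sample statistics or machine learning as noted earlier; the similarity inputs are whatever contour and chroma similarity measures the implementation chose.

```python
def dark_skin_liveness_decision(palm_back_contour_sim: float,
                                face_contour_sim: float,
                                face_back_chroma_sim: float,
                                face_palm_chroma_sim: float,
                                contour_th: float = 0.7,
                                match_th: float = 0.5,
                                mismatch_th: float = 0.3) -> bool:
    """Combine rules A)-D): the palm and back-of-hand outlines must mirror
    each other (A), the face outline must be the same in both shots (B),
    the back of the hand must match the face chroma (C), and the palm
    must NOT match it (D)."""
    return (palm_back_contour_sim >= contour_th      # A: symmetric hand outlines
            and face_contour_sim >= contour_th       # B: same face in both images
            and face_back_chroma_sim >= match_th     # C: back of hand ~ face
            and face_palm_chroma_sim < mismatch_th)  # D: palm differs from face
```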
The method in this embodiment is built around the characteristics that, for dark-skinned people, the skin colors of the palm and the back of the hand differ greatly visually, while the skin color of the back of the hand is similar to that of the face. This method of recognizing living faces of dark-skinned people by "palm flipping, face-hand color difference detection and contour matching" requires no positive and negative samples of dark-skinned living bodies to train a model, reduces the false interception rate for dark-skinned people, and improves recognition accuracy for them.
Based on the same idea, the embodiment of the present specification further provides a device corresponding to the above method. Fig. 5 is a schematic structural diagram of an image-based living body identification device corresponding to fig. 1 provided in an embodiment of the present disclosure. As shown in fig. 5, the apparatus may include:
a first image data obtaining module 502, configured to obtain first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
a second image data obtaining module 504, configured to obtain second image data, where the second image data includes second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data includes image data of a back area of a hand of the living body to be identified;
a first skin color similarity determining module 506, configured to determine a first skin color similarity according to the first face image data and the first palm image data;
a second skin color similarity determining module 508, configured to determine a second skin color similarity according to the second face image data and the second palm image data;
a dark skin color living body determining module 510, configured to determine that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
Optionally, the apparatus may further include:
the continuous multi-frame image data acquisition module is used for acquiring continuous multi-frame image data;
and the living body to be identified judging module is used for judging whether a living body to be identified exists in the image corresponding to the multi-frame image data according to the multi-frame image data.
Optionally, the dark skin color living body determining module 510 may specifically include:
the judging unit is used for judging whether the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value to obtain a first judging result;
a dark skin color living body determining unit, configured to determine that the living body to be identified is a dark skin color living body when the first determination result indicates that the first skin color similarity is smaller than a first threshold and the second skin color similarity is greater than or equal to a second threshold, where the second threshold is greater than the first threshold.
Optionally, the apparatus may further include:
a first contour similarity determining module, configured to determine, according to the first face image data and the second face image data, a first contour similarity of a face region in the first image data and the second image data;
the second contour similarity determining module is used for determining second contour similarity of the palm center area and the back area according to the first palm image data and the second palm image data;
the judging module is used for judging whether the first contour similarity is larger than or equal to a third threshold value and the second contour similarity is larger than or equal to a fourth threshold value to obtain a second judgment result;
and a determining module, configured to determine that living bodies to be identified corresponding to the first image data and the second image data are the same object when the second determination result indicates that the first contour similarity is greater than or equal to a third threshold and the second contour similarity is greater than or equal to a fourth threshold.
Optionally, the apparatus may further include:
a face region segmentation module, configured to segment the first image data and the second image data by using a face segmentation algorithm, so as to obtain face regions corresponding to the first image data and the second image data, respectively;
and the hand region segmentation module is used for segmenting the first image data and the second image data by adopting a hand segmentation algorithm to respectively obtain a palm center region in the first image data and a hand back region in the second image data.
Optionally, the apparatus may further include:
the device comprises a first prompt information display module and a second prompt information display module, wherein the first prompt information display module is used for displaying first prompt information used for prompting an object to be recognized to execute a first specified action, and the first specified action is used for detecting whether the object to be recognized is a living body.
Optionally, the first prompt information display module is specifically configured to:
displaying a first shooting frame;
and/or an image description, a text description for representing the first designated action;
and/or an animation for representing the first specified action.
Optionally, the apparatus may further include:
and the second prompt information display module is used for displaying second prompt information used for prompting the living body to be recognized to execute a second specified action, and the second prompt information is used for prompting the living body to be recognized to enable the face of the person to face the camera and place the palm center of the hand to face the camera.
Optionally, the second prompt information display module may be specifically configured to:
displaying a second photographing frame for photographing a portrait;
and/or a third shooting frame for shooting the palm center;
and/or an image description, a text description for representing the second designated action;
and/or an animation for representing the second specified action.
Optionally, the apparatus may further include:
and the third prompt information display module is used for displaying third prompt information used for prompting the living body to be recognized to execute a third specified action, and the third prompt information is used for prompting the living body to be recognized to keep the face of the human face facing the camera and place the back of the hand facing the camera.
By means of this device, the false interception rate for dark-skinned people is reduced, improving recognition accuracy for them; the time for collecting positive and negative samples to train a model is saved, improving the efficiency of dark-skin living body identification.
Based on the same idea, the embodiment of the present specification further provides a device corresponding to the above method. Fig. 6 is a schematic structural diagram of an image-based living body identification device corresponding to fig. 1 provided in an embodiment of the present specification. As shown in fig. 6, the apparatus 600 may include:
at least one processor 610; and
a memory 630 communicatively coupled to the at least one processor; wherein
the memory 630 stores instructions 620 executable by the at least one processor 610, and the instructions are executed by the at least one processor 610.
The instructions may enable the at least one processor 610 to:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
Based on the same idea, the embodiment of the present specification further provides a computer-readable medium corresponding to the above method. The computer readable medium has computer readable instructions stored thereon that are executable by a processor to implement the method of:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement to a technology could be clearly distinguished as either a hardware improvement (e.g., an improvement to a circuit structure such as a diode, transistor or switch) or a software improvement (an improvement to a method flow). With the development of technology, however, many of today's method-flow improvements can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement to a method flow cannot be realized by hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, such programming is now mostly implemented with "logic compiler" software rather than by making integrated circuit chips by hand; this software is similar to the compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can readily be obtained merely by briefly programming the method flow into an integrated circuit using one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be achieved by logically programming the method steps such that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing the various functions may also be regarded as structures within the hardware component. Indeed, means for performing the functions may be regarded as both software modules for implementing the method and structures within a hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, each described separately. Of course, when implementing one or more embodiments of the present specification, the functions of the various units may be implemented in one or more pieces of software and/or hardware.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to one or more embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; identical or similar parts of the embodiments may be understood with reference to one another, and each embodiment focuses on its differences from the others. In particular, because the system embodiment is substantially similar to the method embodiment, it is described relatively simply; for relevant details, reference may be made to the corresponding description of the method embodiment.
The above description is merely an example of one or more embodiments of the present specification and is not intended to limit them. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of one or more embodiments of the present specification shall fall within the scope of the claims of one or more embodiments of the present specification.

Claims (15)

1. An image-based living body identification method, comprising:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
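The skin-color comparison at the heart of claim 1 can be sketched as follows. This is an illustrative sketch only, not part of the claims: the patent does not fix a particular similarity metric, so cosine similarity between mean YCrCb color vectors of the segmented regions is assumed here.

```python
import numpy as np

def mean_skin_color(region_pixels):
    # region_pixels: (N, 3) array of YCrCb pixel values from a segmented
    # face, palm, or back-of-hand region.
    return region_pixels.mean(axis=0)

def skin_color_similarity(region_a, region_b):
    # Cosine similarity between the mean color vectors of two regions;
    # 1.0 means identical mean skin tone, smaller values mean more different.
    a, b = mean_skin_color(region_a), mean_skin_color(region_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# First image: face region vs. palm region.
# first_similarity = skin_color_similarity(face_pixels_1, palm_pixels_1)
# Second image: face region vs. back-of-hand region.
# second_similarity = skin_color_similarity(face_pixels_2, back_pixels_2)
```

For a dark-skinned living body, the palm is typically lighter than the face (lowering the first similarity) while the back of the hand matches the face (raising the second similarity), which is the contrast the claimed method exploits.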
2. The method of claim 1, further comprising, prior to said acquiring first image data:
acquiring continuous multi-frame image data;
and judging, according to the multi-frame image data, whether the living body to be identified exists in the images corresponding to the multi-frame image data.
3. The method according to claim 1, wherein the determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity specifically includes:
judging whether the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value to obtain a first judgment result;
and when the first judgment result shows that the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value, determining that the living body to be identified is a dark skin color living body, wherein the second threshold value is larger than the first threshold value.
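The decision rule of claim 3 can be expressed directly in code. This sketch is illustrative only: the claim requires merely that the second threshold exceed the first, so the 0.4 / 0.7 default values below are hypothetical placeholders, not values from the patent.

```python
def is_dark_skin_liveness(first_similarity, second_similarity,
                          first_threshold=0.4, second_threshold=0.7):
    # Claim 3: the face-vs-palm similarity must fall below the first
    # threshold while the face-vs-back-of-hand similarity meets or
    # exceeds the (strictly larger) second threshold.
    if second_threshold <= first_threshold:
        raise ValueError("second threshold must exceed the first")
    return (first_similarity < first_threshold
            and second_similarity >= second_threshold)
```

Both conditions must hold; a high palm similarity or a low back-of-hand similarity each cause the check to fail.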
4. The method of claim 1 or 3, further comprising, prior to the determining that the living body to be identified is a dark-skinned living body based on the first skin-tone similarity and the second skin-tone similarity:
determining a first contour similarity of a face region in the first image data and the second image data according to the first face image data and the second face image data;
determining a second contour similarity of the palm center area and the back area according to the first palm image data and the second palm image data;
judging whether the first contour similarity is greater than or equal to a third threshold value and the second contour similarity is greater than or equal to a fourth threshold value to obtain a second judgment result;
and when the second judgment result shows that the first contour similarity is greater than or equal to a third threshold and the second contour similarity is greater than or equal to a fourth threshold, determining that the living bodies to be identified corresponding to the first image data and the second image data are the same object.
5. The method of claim 1, further comprising, prior to said determining a first skin color similarity from said first face image data and said first palm image data:
segmenting the first image data and the second image data by adopting a face segmentation algorithm to respectively obtain face regions corresponding to the first image data and the second image data;
and segmenting the first image data and the second image data by adopting a hand segmentation algorithm to respectively obtain a palm center region in the first image data and a back region in the second image data.
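Claim 5 leaves the choice of face and hand segmentation algorithm open. As one hedged example of the simplest family of such algorithms, skin pixels can be masked by thresholding the Cr/Cb channels of a YCrCb image; the bounds below are commonly cited rough values for skin detection, not values taken from the patent.

```python
import numpy as np

# Commonly cited rough Cr/Cb bounds for skin pixels; illustrative only.
CR_RANGE = (133, 173)
CB_RANGE = (77, 127)

def skin_mask(ycrcb_image):
    # ycrcb_image: (H, W, 3) uint8 array in YCrCb color space.
    # Returns a boolean mask marking pixels whose Cr and Cb values both
    # fall inside the rough skin-tone ranges above.
    cr = ycrcb_image[..., 1].astype(int)
    cb = ycrcb_image[..., 2].astype(int)
    return ((cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]) &
            (cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]))
```

A production system would instead use a trained face/hand segmentation model; the mask above merely shows how a skin region can be isolated before computing the skin-color similarities of claim 1.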
6. The method of claim 2, further comprising, prior to said acquiring a plurality of consecutive frames of image data:
displaying first prompt information for prompting an object to be recognized to execute a first specified action, wherein the first specified action is used for detecting whether the object to be recognized is a living body.
7. The method according to claim 6, wherein the displaying of the first prompt information for prompting the object to be recognized to execute the first specified action specifically includes:
displaying a first shooting frame;
and/or an image or text description for representing the first specified action;
and/or an animation for representing the first specified action.
8. The method of claim 1, prior to said acquiring first image data, further comprising:
and displaying second prompt information for prompting the living body to be recognized to execute a second specified action, wherein the second prompt information is used for prompting the living body to be recognized to turn the face toward the camera and place the palm of the hand toward the camera.
9. The method according to claim 8, wherein the displaying of the second prompting information for prompting the living body to be recognized to perform the second specified action includes:
displaying a second photographing frame for photographing a portrait;
and/or a third shooting frame for shooting the palm center;
and/or an image or text description for representing the second specified action;
and/or an animation for representing the second specified action.
10. The method of claim 1, further comprising, prior to said acquiring second image data:
and displaying third prompt information for prompting the living body to be recognized to execute a third specified action, wherein the third prompt information is used for prompting the living body to be recognized to keep the face facing the camera and place the back of the hand facing the camera.
11. An image-based living body identification apparatus comprising:
the first image data acquisition module is used for acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
the second image data acquisition module is used for acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
the first skin color similarity determining module is used for determining first skin color similarity according to the first face image data and the first palm image data;
the second skin color similarity determining module is used for determining second skin color similarity according to the second face image data and the second palm image data;
and the dark skin color living body determining module is used for determining the living body to be identified as a dark skin color living body based on the first skin color similarity and the second skin color similarity.
12. The apparatus of claim 11, the apparatus further comprising:
the continuous multi-frame image data acquisition module is used for acquiring continuous multi-frame image data;
and the living body to be identified judging module is used for judging whether a living body to be identified exists in the image corresponding to the multi-frame image data according to the multi-frame image data.
13. The apparatus according to claim 11, wherein the dark skin color liveness determination module specifically includes:
the first judging unit is used for judging whether the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value to obtain a first judging result;
a dark skin color living body determining unit, configured to determine that the living body to be identified is a dark skin color living body when the first determination result indicates that the first skin color similarity is smaller than a first threshold and the second skin color similarity is greater than or equal to a second threshold, where the second threshold is greater than the first threshold.
14. An image-based living body identification apparatus comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back area of a hand of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
15. A computer readable medium having stored thereon computer readable instructions executable by a processor to implement the image-based living body identification method according to any one of claims 1 to 10.
CN202010029901.9A 2020-01-13 2020-01-13 Living body identification method, device and equipment based on image Active CN111259757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010029901.9A CN111259757B (en) 2020-01-13 2020-01-13 Living body identification method, device and equipment based on image

Publications (2)

Publication Number Publication Date
CN111259757A true CN111259757A (en) 2020-06-09
CN111259757B CN111259757B (en) 2023-06-20

Family

ID=70950438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010029901.9A Active CN111259757B (en) 2020-01-13 2020-01-13 Living body identification method, device and equipment based on image

Country Status (1)

Country Link
CN (1) CN111259757B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013131407A1 (en) * 2012-03-08 2013-09-12 无锡中科奥森科技有限公司 Double verification face anti-counterfeiting method and device
CN104765440A (en) * 2014-01-02 2015-07-08 株式会社理光 Hand detecting method and device
CN104951940A (en) * 2015-06-05 2015-09-30 西安理工大学 Mobile payment verification method based on palmprint recognition
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device
CN107223258A (en) * 2017-03-31 2017-09-29 中控智慧科技股份有限公司 Image-pickup method and equipment
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN108416338A (en) * 2018-04-28 2018-08-17 深圳信息职业技术学院 A kind of non-contact palm print identity authentication method
WO2019127262A1 (en) * 2017-12-28 2019-07-04 深圳前海达闼云端智能科技有限公司 Cloud end-based human face in vivo detection method, electronic device and program product
CN109976519A (en) * 2019-03-14 2019-07-05 浙江工业大学 A kind of interactive display unit and its interactive display method based on augmented reality
US20190362171A1 (en) * 2018-05-25 2019-11-28 Beijing Kuangshi Technology Co., Ltd. Living body detection method, electronic device and computer readable medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Noor Amjed; Fatimah Khalid; Rahmita Wirza O.K. Rahmat; Hizmawati Bint Madzin: "A Robust Geometric Skin Colour Face Detection Method under Unconstrained Environment of Smartphone Database"
Seokhoon Kang; Byoungjo Choi; Donghwa Jo: "Faces detection method based on skin color modeling"
Yuseok Ban; Sang-Ki Kim; Sangyoun Lee: "Face detection based on skin color likelihood"
Fan Wenbing; Zhu Lianjie: "A gesture detection and recognition method based on skin color feature extraction"

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797735A (en) * 2020-06-22 2020-10-20 深圳壹账通智能科技有限公司 Face video recognition method, device, equipment and storage medium
CN113194323A (en) * 2021-04-27 2021-07-30 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN113194323B (en) * 2021-04-27 2023-11-10 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN113569691A (en) * 2021-07-19 2021-10-29 新疆爱华盈通信息技术有限公司 Human head detection model generation method and device, human head detection model and human head detection method

Also Published As

Publication number Publication date
CN111259757B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
TWI687832B (en) Biometric system and computer-implemented method for biometrics
CN107886032B (en) Terminal device, smart phone, authentication method and system based on face recognition
JP7165742B2 (en) LIFE DETECTION METHOD AND DEVICE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
EP3163500A1 (en) Method and device for identifying region
WO2020018359A1 (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
US20190138748A1 (en) Removing personally identifiable data before transmission from a device
US20200279120A1 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
US20170124719A1 (en) Method, device and computer-readable medium for region recognition
CN111259757B (en) Living body identification method, device and equipment based on image
EP3362942B1 (en) Electronic devices with improved iris recognition and methods thereof
WO2019137178A1 (en) Face liveness detection
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
CN108416291B (en) Face detection and recognition method, device and system
CN111353404B (en) Face recognition method, device and equipment
CN112733802B (en) Image occlusion detection method and device, electronic equipment and storage medium
CN104063709B (en) Sight line detector and method, image capture apparatus and its control method
WO2024001095A1 (en) Facial expression recognition method, terminal device and storage medium
JP2020518879A (en) Detection system, detection device and method thereof
CN104823201A (en) Illumination sensitive face recognition
Tsai et al. Robust in-plane and out-of-plane face detection algorithm using frontal face detector and symmetry extension
Harichandana et al. PrivPAS: A real time Privacy-Preserving AI System and applied ethics
Liu et al. Presentation attack detection for face in mobile phones
US10860834B2 (en) Enhanced biometric privacy
CN111680670A (en) Cross-mode human head detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant