CN111259757B - Living body identification method, device and equipment based on image


Info

Publication number
CN111259757B
CN111259757B (application CN202010029901.9A)
Authority
CN
China
Prior art keywords
image data
living body
identified
skin color
palm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010029901.9A
Other languages
Chinese (zh)
Other versions
CN111259757A (en)
Inventor
徐崴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Labs Singapore Pte Ltd
Original Assignee
Alipay Labs Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Labs Singapore Pte Ltd
Priority to CN202010029901.9A
Publication of CN111259757A
Application granted
Publication of CN111259757B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/759 Region-based matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiments of this specification provide an image-based living body identification method, apparatus, and device. The method includes: acquiring first image data comprising first face image data and first palm image data of a living body to be identified; acquiring second image data comprising second face image data and second palm image data of the living body to be identified; determining a first skin color similarity according to the first face image data and the first palm image data; determining a second skin color similarity according to the second face image data and the second palm image data; and determining, based on the first skin color similarity and the second skin color similarity, that the living body to be identified is a dark skin color living body.

Description

Living body identification method, device and equipment based on image
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, and a device for image-based living body identification.
Background
With the development of deep learning, face recognition technology has matured and is deployed at scale in many domains in China, for example security, payment, and identity authentication. Domestic online payment platforms use face recognition to support business scenarios such as face-scan login, face-scan payment, and face-scan real-name authentication. In these scenarios, face recognition has become one of the main means of authenticating a user's identity. During face verification, the verified object must first be identified as a living body, to prevent an attacker from compromising information security by impersonating a legitimate user with a photograph, a recorded video, or a wax figure.
In the prior art, information security is ensured by detection with a face liveness detection algorithm trained on face data. Training such a model requires both positive samples and attack samples (such as printed photos and phone-screen captures). Because dark-skinned faces are underrepresented in the training samples, the model struggles to capture the characteristics of dark-skinned people, and attack samples from this population are even harder to collect. A liveness model whose training emphasizes attack-sample characteristics depends heavily on non-living face attack samples; since attack data for dark-skinned faces is scarce, the model may misclassify dark-skinned users as attack samples when applied to authentication in international business scenarios, causing them to be wrongly intercepted.
Accordingly, there is a need to provide a more reliable living body identification scheme.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide an image-based living body identification method, apparatus, and device, which reduce the false interception rate for dark-skinned people and improve identification accuracy for them.
In order to solve the above technical problems, the embodiments of the present specification are implemented as follows:
the living body identification method based on the image provided by the embodiment of the specification comprises the following steps:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, wherein the first palm image data comprises image data of a palm center area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back hand area of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
and determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
The embodiment of the present specification provides an image-based living body recognition apparatus, including:
The first image data acquisition module is used for acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, wherein the first palm image data comprises image data of a palm center area of the living body to be identified;
a second image data acquisition module, configured to acquire second image data, where the second image data includes second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data includes image data of a back hand area of the living body to be identified;
the first skin color similarity determining module is used for determining first skin color similarity according to the first face image data and the first palm image data;
the second skin color similarity determining module is used for determining second skin color similarity according to the second face image data and the second palm image data;
and the dark skin color living body determining module is used for determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
An image-based living body recognition apparatus provided in an embodiment of the present specification includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, wherein the first palm image data comprises image data of a palm center area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back hand area of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
and determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
Embodiments of the present disclosure provide a computer readable medium having stored thereon computer readable instructions executable by a processor to implement an image-based living body identification method.
One embodiment of the present specification achieves the following advantageous effects: first image data containing face area and palm area image data and second image data containing face area and back-of-hand area image data are acquired; from these, a first similarity between the face area and the palm area and a second similarity between the face area and the back-of-hand area are determined; and the living body to be identified is determined to be a dark skin color living body based on the first and second skin color similarities. There is no need to collect a large amount of face data from dark-skinned people to train a face liveness detection model: dark-skinned people can be identified solely from the characteristic that their face skin color is similar to their back-of-hand skin color while differing markedly from their palm skin color. This reduces the false interception rate for dark-skinned people and improves identification accuracy; it also saves the time of collecting positive and negative training samples and improves the efficiency of dark skin color living body identification.
Drawings
The accompanying drawings, which are included to provide a further understanding of one or more embodiments of the specification, illustrate and explain one or more embodiments of the specification, and are not an undue limitation on the one or more embodiments of the specification. In the drawings:
fig. 1 is a schematic flow chart of an image-based living body recognition method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a living experience authentication interface in an image-based living body recognition method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a face image and palm image acquisition interface in an image-based living body recognition method according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a face image and a back image acquisition interface in an image-based living body recognition method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural view of an image-based living body recognition apparatus corresponding to fig. 1 provided in the embodiment of the present specification;
fig. 6 is a schematic structural view of an image-based living body recognition apparatus corresponding to fig. 1 provided in the embodiment of the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of one or more embodiments of the present specification more clear, the technical solutions of one or more embodiments of the present specification will be clearly and completely described below in connection with specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without undue burden, are intended to be within the scope of one or more embodiments herein.
In the field of identity authentication, attackers impersonate legitimate users by means of printed photos, high-definition prints, photographing or recording a phone screen, mask attacks, and the like, thereby compromising the information security of legitimate users. If detection relies on a face liveness detection algorithm trained on face data, dark-skinned people may fail to be recognized and be wrongly intercepted, because relevant data for this population is difficult to obtain. The present scheme abandons model training. Instead, it takes as the visual characteristic of the liveness detection algorithm the facts that the palm and the back of the hand show a large visible color difference, and that the back-of-hand skin color is similar to the face skin color. On this basis it proposes a detection method built on palm flipping, face-hand color difference detection, and contour matching, solving the difficulty of face liveness detection for dark-skinned people.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an image-based living body recognition method according to an embodiment of the present disclosure. From the program perspective, the execution subject of the flow may be a program or an application client that is installed on an application server.
As shown in fig. 1, the process may include the steps of:
step 102: acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm center area of the living body to be identified.
Image data may be a set of per-pixel gray values represented numerically. Because the scheme relies on the characteristic that, for dark-skinned people, face skin color is similar to back-of-hand skin color while differing markedly from palm skin color, image data of the face, the palm, and the back of the hand must all be acquired. The first image data includes first face image data and first palm image data, where the first palm image data includes image data of the palm center area of the living body to be identified. It should be understood that the first image data contains face data and palm data but may also contain other data, such as environmental data beyond the face and hand. The terms first image data, first face image data, and first palm image data serve only to distinguish these from other image data; "first" carries no other special meaning here.
Step 104: and acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back hand area of the living body to be identified.
Compared with the first image data in step 102, the second image data includes image data of the back-of-hand area rather than of the palm area. The second face image data in the second image data and the face image data in the first image data should correspond to the same living body; if they do not, an attack characteristic can be considered present. The first and second face image data need not be captured in two separate sessions: under video capture, it suffices to track the turning of the hand and acquire image data of the palm area before the flip and of the back-of-hand area after it, thereby obtaining the face image data, palm image data, and back-of-hand image data.
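The single-session video variant described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the patent's prescribed implementation; it assumes a per-frame hand-orientation label (e.g. from a hand detector) is already available:

```python
def select_flip_frames(orientations):
    """Given per-frame hand-orientation labels ('palm' or 'back'),
    return (index of last palm-up frame, index of first back-of-hand
    frame) around the flip, or None if no flip is observed."""
    for i in range(1, len(orientations)):
        if orientations[i - 1] == "palm" and orientations[i] == "back":
            return i - 1, i  # the frame pair straddling the flip
    return None
```

A frame pair selected this way supplies the palm-area image (before the flip) and the back-of-hand-area image (after it), while the face region can be cropped from either frame.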
Step 106: and determining a first skin color similarity according to the first face image data and the first palm image data.
A similarity-based method can identify an object by using several specific relative indexes as a unified scale, determining an evaluation standard value by the principle of fuzzy comprehensive evaluation, and obtaining the similarity between the object to be identified and the standard value on the chosen indexes. This step determines the skin color similarity between the face area and the palm area. The similarity may be computed algorithmically, or determined by other means. One possible algorithmic pipeline is: obtain the image parameters; map the image from RGB to YCbCr (a color space in which Y is the luminance component, Cb the blue chrominance component, and Cr the red chrominance component); build a skin color model; use the model to obtain a similarity matrix; apply median filtering; and normalize the similarity. Various distance measures may be used when computing the similarity, such as Euclidean distance or cosine distance; any method suitable for computing skin color similarity falls within the protection scope of the embodiments of this specification, and the scheme imposes no specific limitation.
When determining skin color similarity, the average chrominance of the face area, the palm area, and the back-of-hand area may be computed first, and the similarity then measured between these averages.
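As one hedged illustration of this average-chrominance approach (not the patent's prescribed implementation; the function names and the distance-to-similarity mapping are assumptions), the mean Cb/Cr of each region can be compared via a Euclidean distance normalized to [0, 1], using the standard BT.601 full-range RGB-to-YCbCr conversion:

```python
import math

def _chroma(r, g, b):
    """BT.601 full-range RGB -> (Cb, Cr) chrominance of one pixel."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def mean_chroma(pixels):
    """Average (Cb, Cr) over an iterable of (R, G, B) pixels."""
    chromas = [_chroma(r, g, b) for r, g, b in pixels]
    n = len(chromas)
    return (sum(c[0] for c in chromas) / n, sum(c[1] for c in chromas) / n)

def skin_tone_similarity(region_a, region_b):
    """Similarity in [0, 1]; 1.0 means identical average chrominance."""
    (cb1, cr1), (cb2, cr2) = mean_chroma(region_a), mean_chroma(region_b)
    dist = math.hypot(cb1 - cb2, cr1 - cr2)
    return 1.0 - dist / math.hypot(255.0, 255.0)  # normalize by max chroma gap
```

Two identical regions score 1.0; a strongly red region against a strongly blue one scores roughly 0.5. Comparing chrominance rather than raw RGB makes the measure less sensitive to lighting-induced brightness differences between the face and hand regions.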
Step 108: and determining a second skin color similarity according to the second face image data and the second palm image data.
This step is to determine the similarity of the skin tone of the face and the skin tone of the back of the hand, and the determination method may refer to the manner in step 106.
Step 110: and determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
From the similarity between the face skin color and the palm skin color, and the similarity between the face skin color and the back-of-hand skin color, it can be determined whether the living body to be identified is a dark skin color living body. The determination rests on the characteristic that, for a dark skin color living body, the face skin color is similar to the back-of-hand skin color while differing markedly from the palm skin color. The similarity threshold between face and back-of-hand skin color and that between face and palm skin color can be defined according to the practical situation; the scheme imposes no specific limitation.
In the method of fig. 1, first image data containing face area and palm area image data and second image data containing face area and back-of-hand area image data are acquired; from them, a first similarity between the face area and the palm area and a second similarity between the face area and the back-of-hand area are determined; and the living body to be identified is determined to be a dark skin color living body based on these two similarities. No face liveness detection model trained on a large amount of dark-skinned face data is needed: the characteristic that face skin color is similar to back-of-hand skin color and differs markedly from palm skin color suffices to identify dark-skinned people, reducing their false interception rate and improving identification accuracy.
The examples of the present specification also provide some specific embodiments of the method based on the method of fig. 1, which is described below.
Identifying a dark skin color living body requires both identifying the dark skin color and identifying the living body. The interface of fig. 2 may be used for liveness authentication, and whether the object to be identified is a living body may be determined before the dark skin color is identified. Specifically, before the first image data is acquired, the method may further include:
acquiring continuous multi-frame image data;
judging whether a living body to be identified exists in an image corresponding to the multi-frame image data according to the multi-frame image data.
In the process of dark skin color living body identification, a technical means is first needed to determine whether the user currently facing the face-scanning user interface (UI) is a normal living natural person or a non-living attack impersonating the user's identity (such as a photo, a high-definition print, a phone screen, or a mask attack). Therefore, it may first be judged whether a living body to be identified exists in the images corresponding to the multi-frame image data, or it may first be identified whether the object to be identified has a dark skin color; the order can be chosen according to the actual situation.
The continuous multi-frame image data may be a video or an animated sequence. Its purpose is to detect whether a living body to be identified exists in the corresponding images. This may also be called random-action-based living body identification: during liveness detection, the system randomly selects actions that the user must complete on the spot, and a vision algorithm judges whether the user did so. The randomness effectively raises the cost of an attack and prevents an attacker from preparing in advance. For example, prompt information can be displayed on the shooting interface instructing the object to be identified to make a specified action. Liveness detection can be applied in identity verification scenarios to confirm an object's real physiological characteristics. Common liveness detection methods fall roughly into four categories. The first detects inherent characteristics of the human face, including blink detection and spectrum analysis. The second uses a light source or sensing device, such as a thermal image sensor, to detect spoofing by distinguishing the reflections of a living face and a fake under infrared light. The third extracts features from video and audio: when a person speaks, mouth movement and sound are synchronized. The last requires the user to make a specified action and judges, by verifying synchronization with that action, whether the subject is live.
Of course, in face recognition applications, living body detection may be performed by eye blink, mouth opening, head shaking, head pointing, head-up camera, etc., and techniques such as face key point positioning and face tracking may be used to determine whether an object to be recognized is a living body.
Of course, besides the above methods, whether the object to be recognized is a living body can also be determined by monitoring the user's palm-turning process; for example, a real-time hand detector based on an SSD neural network may be used to track the turning of the user's palm and thereby verify liveness. All of these living body identification methods fall within the scope of the present scheme, and the embodiments of this specification impose no specific limitation on the liveness identification method.
The method can effectively resist common attack means such as photos, face changing, masks, shielding, screen flipping and the like, thereby helping users to discriminate fraudulent behaviors and guaranteeing the benefits of the users.
In the method step of fig. 1, after determining the first skin color similarity of the face region and the palm region and the second skin color similarity of the face region and the back hand region, it may be determined whether the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity, and specifically may include:
judging whether the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value, and obtaining a first judging result;
and when the first judging result shows that the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value, determining that the living body to be identified is a deep skin color living body, wherein the second threshold value is larger than the first threshold value.
Because for dark-skinned people the face skin color is similar to the back-of-hand skin color while differing markedly from the palm skin color, a high face/back-of-hand similarity should coincide with a low face/palm similarity; the first threshold is therefore set far smaller than the second threshold. For example: suppose the first skin color similarity between the face area and the palm center area, determined from the first face image data and the first palm image data, is 0.1, and the second skin color similarity between the face area and the back-of-hand area, determined from the second face image data and the second palm image data, is 0.9; the first threshold is set to 0.3 and the second threshold to 0.7 (the second threshold being larger than the first). Then the first skin color similarity satisfies 0.1 < 0.3 and the second satisfies 0.9 > 0.7, so the living body to be identified can be determined to be a dark skin color living body.
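The decision rule of this worked example can be sketched as follows (the default thresholds 0.3 and 0.7 are the illustrative values above, not values prescribed by the scheme):

```python
def is_dark_skin_living_body(sim_face_palm, sim_face_back, t1=0.3, t2=0.7):
    """Decision rule: face/palm similarity below the first threshold AND
    face/back-of-hand similarity at or above the second (larger) threshold.
    Default thresholds are the illustrative values from the text."""
    assert t2 > t1, "the second threshold must exceed the first"
    return sim_face_palm < t1 and sim_face_back >= t2

print(is_dark_skin_living_body(0.1, 0.9))  # True: matches the worked example
print(is_dark_skin_living_body(0.5, 0.9))  # False: palm too similar to face
```

Both conditions must hold: a photo or screen replay that yields uniformly high (or uniformly low) similarities in both comparisons fails the conjunction.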
It should further be noted that, after the first and second skin color similarities are determined, they must be compared with the set similarity thresholds. The first and second thresholds may be determined by statistics over samples collected in practice, or by machine learning over positive and negative samples; the embodiments of this specification impose no limitation here.
By the above method, living body identification of dark-skinned faces can be achieved, solving the excessive false interception of dark-skinned faces that conventional liveness algorithms cause due to the particularity of dark-skinned facial appearance.
In the process of determining a dark skin color living body, to prevent different users from completing authentication cooperatively (for example, user A's face is placed in the face acquisition area while the acquired hand information belongs to user B), whether the acquired image data belongs to the same person can be determined by comparing the face contour similarity and the palm/back-of-hand contour similarity across the different image data. The specific method is as follows:
before the determining that the living body to be identified is a deep skin tone living body based on the first skin tone similarity and the second skin tone similarity, the method may further include:
determining a first contour similarity of the face region in the first image data and the second image data according to the first face image data and the second face image data;
determining a second contour similarity of the palm center region and the back hand region according to the first palm image data and the second palm image data;
judging whether the first contour similarity is greater than or equal to a third threshold and the second contour similarity is greater than or equal to a fourth threshold, to obtain a second judgment result;
and when the second judgment result shows that the first contour similarity is greater than or equal to the third threshold and the second contour similarity is greater than or equal to the fourth threshold, determining that the living bodies to be identified corresponding to the first image data and the second image data are the same object.
In a practical application scenario, if the identified living body is the same user, the face contours in the acquired first image data and second image data should be the same, and the palm contour in the first image data and the back-of-hand contour in the second image data should be symmetrical. The third threshold and the fourth threshold may be set according to the actual situation: they may be determined statistically from collected samples, or by machine learning on positive and negative samples, which is not specifically limited in the embodiments of the present specification.
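One way to realize the "symmetrical contours count as similar" convention described here is to mirror one contour horizontally before comparing it with the other. The sketch below uses hypothetical toy contours represented as point lists; a real implementation would extract contours with the segmentation algorithms described later, and the distance-to-similarity mapping is an illustrative assumption.

```python
def mirror_contour(points, width):
    """Flip a contour horizontally: the back-of-hand outline should
    roughly match the mirrored palm outline of the same hand."""
    return [(width - x, y) for (x, y) in points]

def contour_similarity(a, b):
    """Mean point-to-point distance mapped into a (0, 1] score."""
    assert len(a) == len(b)
    d = sum(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            for (xa, ya), (xb, yb) in zip(a, b)) / len(a)
    return 1.0 / (1.0 + d)

palm = [(10, 0), (20, 5), (30, 20)]          # hypothetical palm contour
back = mirror_contour(palm, width=40)        # a perfectly symmetric hand
sim = contour_similarity(mirror_contour(back, 40), palm)
print(sim)  # 1.0 for an exactly mirrored contour
```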
The method for determining the contour similarity may refer to the aforementioned method for determining the skin color similarity, and will not be described herein.
It should be noted that, when comparing contour similarity, if similarity is defined so that identical contours score highest, the similarity between the palm image and the back-of-hand image will be comparatively low, because those two contours are mirror images of each other; if similarity is instead defined so that more alike contours score higher, the back-of-hand image and the palm image will be judged similar. In this scheme, the contour similarity comparison can be understood as: as long as the contours are basically alike (allowing for mirror symmetry), the similarity is considered large.
By the above method, it can be determined that the images in the acquired image data belong to the same living body, which better guarantees the accuracy of living body identification.
Before the contour similarity is judged, the face region, the palm region, and the back-of-hand region also need to be segmented from the acquired images. The specific implementation may adopt the following method:
before the determining the first skin color similarity according to the first face image data and the first palm image data, the method may further include:
dividing the first image data and the second image data by adopting a face segmentation algorithm to respectively obtain face areas corresponding to the first image data and the second image data;
And dividing the first image data and the second image data by adopting a hand segmentation algorithm to respectively obtain a palm area in the first image data and a back area in the second image data.
Face segmentation is one of the research topics of intelligent information processing and computer vision, and a wide variety of face segmentation algorithms can be used here. For example: since the color of a face is clearly distinguishable from the surrounding environment, color features can be used to segment the face; motion features of the image can be used to locate and segment the face, for instance by comparing the differences between adjacent frames of a moving image sequence to detect a person's motion, outlining the rough contour of the person, and then further locating the face image; a gray-level face segmentation algorithm exploiting facial symmetry can also be used, with steps such as edge extraction, edge refinement, symmetry analysis, and face segmentation; Face Parsing may also be used to segment out the various parts of the face.
A hand segmentation algorithm solves the hand segmentation problem, which can be viewed as labeling hand pixels and non-hand pixels in RGB images and depth images, such as those obtained by a Kinect sensor. Hand segmentation methods mainly include skin-color-based, motion-based, and contour-based algorithms.
In the above method, the purpose of the face segmentation algorithm is to segment the face region, and the purpose of the hand segmentation algorithm is to segment the palm region and the back region, and the specific face segmentation algorithm and the hand segmentation algorithm are not specifically limited in this embodiment of the present specification.
By the above method, the image data corresponding to the face region, the palm region, and the back-of-hand region are segmented from the first image data and the second image data, facilitating the subsequent determination of the first and second similarities, so that dark skin color living body identification can be carried out more quickly and accurately.
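As one example of the color-feature approach to segmentation mentioned above, a rough skin-pixel mask can be computed from simple per-channel rules in RGB space. The specific thresholds below are illustrative assumptions, not values from the scheme; practical systems would use a trained segmenter or a calibrated color space such as YCrCb or HSV.

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Very rough skin-pixel mask: R dominant, with R > G > B.
    Illustrative only -- real face/hand segmentation is far more robust."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)   # skin-like pixel
img[1, 1] = (30, 60, 200)    # background pixel
mask = skin_mask(img)
print(mask[0, 0], mask[1, 1])  # True False
```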
In a specific application scenario, in order to authenticate and identify an object to be identified, prompt information is displayed in the interface of the terminal device that collects the images, prompting the object to perform the corresponding actions so that authentication and identification can be carried out. Before the continuous multi-frame image data is acquired, prompt information may be displayed first, which may specifically include:
and displaying first prompt information for prompting the object to be identified to execute a first specified action, wherein the first specified action is used for detecting whether the object to be identified is a living body or not.
The displaying the first prompt information for prompting the object to be identified to execute the first specified action may specifically include:
displaying a first shooting frame;
and/or an image specification for representing the first specified action;
and/or an animation for representing the first specified action.
In an actual application scenario, before the continuous multi-frame image data is acquired, prompt information and a photographing frame are displayed on the image-collecting terminal interface. The first prompt information may be information prompting the object to be identified to perform the first specified action. The first specified action mentioned here may be any action capable of revealing whether the object to be identified is a living body, for example: specifying that the object blink at the camera, raise and flip the palm, open the mouth, shake the head, nod, and so on. Specifically, the description may be made in connection with the following drawings:
Fig. 2 is a schematic diagram of a liveness verification interface in an image-based living body identification method according to an embodiment of the present specification.
As shown in fig. 2, when living body identification is required, the image acquisition terminal opens a shooting interface and displays the first prompt information, where the first prompt information may include a first photographing frame 201; and/or an image description and text description 202 representing the first specified action; and/or an animation 203 representing the first specified action. For example, the text 202 "please keep the head in the photographing frame, raise the left hand to eye level, and blink at the camera" and the first photographing frame 201 may be displayed on the terminal interface. In an actual application scenario, the content of the prompt information may be set according to the actual situation, as long as the user is prompted to make the specified action; all examples listed in the embodiments of the present specification are only used to explain the scheme and do not limit it in any way.
The displayed first photographing frame 201 may have various shapes set according to actual requirements, as long as an image of the specified action performed by the object to be identified can be collected. For example, assume the first specified action is: the user blinks and raises the left hand to make a fist. The first photographing frame may then be a circular, square, or otherwise shaped area; it is only necessary that the face and the left hand be placed inside it. Of course, in order to collect the face image clearly, the raised hand should not cover the face when the user is prompted to make the specified action. The first photographing frame may include an avatar photographing frame and a gesture photographing frame, which may be set according to the actual situation; this is not particularly limited in the embodiments of the present specification. When the object to be identified passes the living body identification, a prompt such as "liveness verification passed", or other prompt information guiding the user to the next stage of verification, can be displayed.
After verifying that the object to be identified is a living body, the skin color of the living body to be identified needs to be further determined. First, a face image and a palm image of the living body are acquired; the information displayed on the terminal interface at this time can be described with reference to the following drawings:
Fig. 3 is a schematic diagram of a face image and palm image acquisition interface in an image-based living body recognition method according to an embodiment of the present disclosure.
The acquiring the first image data may further include:
and displaying second prompt information for prompting the living body to be identified to execute a second specified action, where the second prompt information is used to prompt the living body to be identified to face the camera and place the palm of one hand toward the camera.
The displaying the second prompting information for prompting the living body to be identified to execute the second designated action may specifically include:
displaying a second photographing frame for photographing the person image;
and/or a third photographing frame for photographing the palm;
and/or image instructions for representing the second specified action;
and/or an animation for representing the second specified action.
As shown in fig. 3, after the object to be identified is determined to be a living body, second prompt information is displayed on the terminal interface. The second prompt information may be used to prompt the living body to be identified to face the camera and place the palm toward the camera, and may specifically include a second photographing frame 301 for photographing the person; and/or a third photographing frame 302 for photographing the palm; and/or an image description and text description 303 representing the second specified action; and/or an animation 304 representing the second specified action. The shapes of the second photographing frame 301 and the third photographing frame 302 may be set according to the actual situation, with reference to the photographing frame in the first prompt information. Of course, the second photographing frame 301 and the third photographing frame 302 are only used to explain that a face image and a palm image need to be acquired; in practice there may be multiple photographing frames or only one, as long as both the face image and the palm image can be acquired. The text description in the prompt information can be defined according to the actual situation, for example: "please place the head and palm in the corresponding photographing frames", or, if there is only one photographing frame, "please place the head and palm inside the photographing frame". Specifically, in order to collect the palm image data, the living body to be identified can be prompted to raise a hand beside the face with the five fingers open and without occluding the face, so that an image in which the face and the palm coexist is captured.
When acquiring the face image data and the palm image, the image of coexistence of the face and the back of the hand needs to be acquired, and when acquiring the image, the display interface information can be described with reference to the following drawings:
fig. 4 is a schematic diagram of a face image and a back image acquisition interface in an image-based living body recognition method according to an embodiment of the present disclosure.
The acquiring the second image data may further include:
displaying third prompt information for prompting the living body to be identified to execute a third specified action, where the third prompt information is used to prompt the living body to be identified to keep the first specified action unchanged and place the back of the other hand toward the camera. Specifically, displaying the third prompt information for prompting the living body to be identified to perform the third specified action may specifically include:
displaying a fourth photographing frame for photographing a person image;
and/or a fifth photographing frame for photographing the back of the hand;
and/or image descriptions, text descriptions for representing the third specified action;
and/or an animation for representing the third specified action.
As shown in fig. 4, before the face image data and the back-of-hand image data are acquired, third prompt information may be displayed on the terminal interface. The third prompt information may be used to prompt the living body to be identified to face the camera and place the back of the hand toward the camera, and may specifically include a fourth photographing frame 401 for photographing the person; and/or a fifth photographing frame 402 for photographing the back of the hand; and/or an image description and text description 403 representing the third specified action; and/or an animation 404 representing the third specified action. The shapes of the fourth photographing frame 401 and the fifth photographing frame 402 may be set according to the actual situation, with reference to the photographing frame in the first prompt information. Of course, the fourth photographing frame 401 and the fifth photographing frame 402 are only used to explain that a face image and a back-of-hand image need to be acquired; in practice there may be multiple photographing frames or only one, as long as both images can be acquired. The text description in the prompt information can be defined according to the actual situation, for example: "please place the head and back of the hand in the corresponding photographing frames", or, if there is only one photographing frame, "please place the head and back of the hand inside the photographing frame".
Specifically, in order to collect the back-of-hand image data, the living body to be identified can be prompted to raise the hand beside the face with the five fingers open and the back of the hand facing the camera, without occluding the face, thereby ensuring that an image in which the face and the back of the hand coexist is captured.
By the above method, corresponding prompt information is displayed on the terminal interface to prompt the object to be identified to perform the corresponding actions, facilitating identification of the object and improving the efficiency of dark skin color living body identification.
In a specific application scenario, after the dark skin color living body is identified, the identity of the user may be further verified. For example, the acquired face image information can be compared with face image information stored in the system in advance to determine whether the identity of the living body to be identified is correct; if so, the authentication passes. Correspondingly, prompt information instructing the user to continue the identity authentication operation can be displayed on the terminal interface, and prompt information indicating that authentication has passed can be displayed after it passes.
It should be noted that, in a specific application scenario, the method in the foregoing embodiments may be implemented by taking photos or by recording video, and various manners may be adopted when determining whether the object to be identified is a living body. When the object to be identified is instructed to make a specified action, a time limit for collecting the image can be set; if the specified action is not collected within that time, the corresponding prompt information can be displayed again to prompt the object to make the action. When collecting the palm image data of the living body to be identified, the living body can be prompted to face the palm toward the camera with the five fingers open; of course, other specified actions may be used, as long as the palm image data can be accurately acquired. When acquiring the face, palm, and back-of-hand image data, the placement of the hand can be set according to the actual situation, for example beside the face without occluding it, or above the head; the embodiments of the present specification are not limited in this regard. To better describe the technical solution in the above embodiments, the following specific flow steps may be described:
Assuming that in the security scenario, the identity of the user needs to be authenticated, the following steps may be adopted to determine whether the user to be identified is a dark skin living body:
1) Prompt the object to be identified in the interactive UI to raise one hand to eye level and blink at the camera. A corresponding recognition algorithm is used to recognize whether the user blinked; if so, proceed to step 2) below. Otherwise, judge it to be a non-living-body attack.
It should be noted that this step may be a random action specified by the system, and the specified action in the authentication process may be an action that the user cannot know in advance. Such as: the specified actions may include one or more of blinking, flipping a palm, opening a mouth, shaking a head, and the like.
2) Prompt the object to be identified to raise the other hand beside the face (the hand does not cover the face and the five fingers are open), and take a photo in which the face and the palm coexist.
3) Prompting the object to be identified to turn over the hand in the step 2) (the turned hand does not cover the face, and the five fingers are opened), and then shooting a photo of coexistence of the face and the back of the hand.
4) Using the pictures taken in step 2) and step 3), it is determined whether the living body is a deep skin color living body based on the following rule:
A) Use a hand segmentation algorithm to extract the hand regions in the palm image and the back-of-hand image; the two contours should be symmetrical;
B) use a face segmentation algorithm to extract the face regions in the palm image and the back-of-hand image; their contours and average chromaticities should be similar;
C) the average chromaticity of the back-of-hand region in the back-of-hand image should be close to the average chromaticity of the face;
D) the average chromaticity of the palm region in the palm image should not be close to the average chromaticity of the face.
If all of the above rules are satisfied, the object to be identified is judged to be a dark skin color living body.
Of course, it should be noted that steps 1) to 4) and rules A) to D) are only used to help explain the specific scheme in the embodiments of the present specification and do not limit its scope. In a practical application scenario, it is only necessary to exploit the characteristic that "for dark-skinned people the palm and the surrounding skin show a large visual color difference, while the skin color of the back of the hand is similar to that of the face".
The method in this embodiment exploits the characteristics that, for dark-skinned people, the palm and the surrounding skin show a large visual color difference, while the back-of-hand skin color is similar to the facial skin color. No positive and negative dark skin color liveness samples need to be collected to train a model; instead, palm flipping, face/hand color-difference detection, and contour matching are used to perform living body identification on the faces of dark-skinned people, reducing the false interception rate and improving the recognition accuracy for this population.
Based on the same thought, the embodiment of the specification also provides a device corresponding to the method. Fig. 5 is a schematic structural view of an image-based living body recognition apparatus corresponding to fig. 1 provided in the embodiment of the present specification. As shown in fig. 5, the apparatus may include:
a first image data acquisition module 502, configured to acquire first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, wherein the first palm image data comprises image data of a palm center area of the living body to be identified;
a second image data obtaining module 504, configured to obtain second image data, where the second image data includes second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data includes image data of a back hand area of the living body to be identified;
a first skin tone similarity determining module 506, configured to determine a first skin tone similarity according to the first face image data and the first palm image data;
a second skin color similarity determining module 508, configured to determine a second skin color similarity according to the second face image data and the second palm image data;
The deep skin color living body determining module 510 is configured to determine that the living body to be identified is a deep skin color living body based on the first skin color similarity and the second skin color similarity.
Optionally, the apparatus may further include:
a continuous multi-frame image data acquisition module for acquiring continuous multi-frame image data;
and the living body judging module to be identified is used for judging whether the living body to be identified exists in the image corresponding to the multi-frame image data according to the multi-frame image data.
Optionally, the deep skin color living body determining module 510 may specifically include:
the judging unit is used for judging whether the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value, so that a first judging result is obtained;
and the dark skin color living body determining unit is used for determining that the living body to be identified is a dark skin color living body when the first judging result shows that the first skin color similarity is smaller than a first threshold value and the second skin color similarity is larger than or equal to a second threshold value, and the second threshold value is larger than the first threshold value.
Optionally, the apparatus may further include:
a first contour similarity determining module, configured to determine a first contour similarity of the face regions in the first image data and the second image data according to the first face image data and the second face image data;
A second contour similarity determining module, configured to determine a second contour similarity of the palm area and the back hand area according to the first palm image data and the second palm image data;
the judging module is used for judging whether the first contour similarity is larger than or equal to a third threshold value and the second contour similarity is larger than or equal to a fourth threshold value, so as to obtain a second judging result;
and the determining module is used for determining that the living bodies to be identified corresponding to the first image data and the second image data are the same object when the second judging result shows that the first contour similarity is larger than or equal to a third threshold value and the second contour similarity is larger than or equal to a fourth threshold value.
Optionally, the apparatus may further include:
the face region segmentation module is used for segmenting the first image data and the second image data by adopting a face segmentation algorithm to respectively obtain face regions corresponding to the first image data and the second image data;
and the hand region segmentation module is used for segmenting the first image data and the second image data by adopting a hand segmentation algorithm to respectively obtain a palm region in the first image data and a back hand region in the second image data.
Optionally, the apparatus may further include:
the first prompt information display module is used for displaying first prompt information for prompting an object to be identified to execute a first appointed action, and the first appointed action is used for detecting whether the object to be identified is a living body or not.
Optionally, the first prompt information display module is specifically configured to:
displaying a first shooting frame;
and/or image descriptions, text descriptions for representing the first specified action;
and/or an animation for representing the first specified action.
Optionally, the apparatus may further include:
the second prompt information display module is used for displaying second prompt information for prompting the living body to be identified to execute a second designated action, and the second prompt information is used for prompting the living body to be identified to face the person to the camera and place the palm center of the hand to face the camera.
Optionally, the second prompt information display module may be specifically configured to:
displaying a second photographing frame for photographing the person image;
and/or a third photographing frame for photographing the palm;
and/or image instructions, text instructions for representing the second specified action;
and/or an animation for representing the second specified action.
Optionally, the apparatus may further include:
the third prompt information display module is used for displaying third prompt information for prompting the living body to be identified to execute a third appointed action, and the third prompt information is used for prompting the living body to be identified to keep the face of the person facing the camera and placing the back of the hand facing the camera.
By the device, the false interception rate of dark skin color crowds is reduced, and the recognition accuracy of the dark skin color crowds is improved; the time for collecting positive and negative samples of the training model is saved, and the efficiency of identifying deep skin color living bodies is improved.
Based on the same thought, the embodiment of the specification also provides equipment corresponding to the method. Fig. 6 is a schematic structural view of an image-based living body recognition apparatus corresponding to fig. 1 provided in the embodiment of the present specification. As shown in fig. 6, the apparatus 600 may include:
at least one processor 610; the method comprises the steps of,
a memory 630 communicatively coupled to the at least one processor; wherein,
the memory 630 stores instructions 620 executable by the at least one processor 610.
The instructions may enable the at least one processor 610 to:
Acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, wherein the first palm image data comprises image data of a palm center area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back hand area of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
and determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
Based on the same thought, the embodiment of the specification also provides a computer readable medium corresponding to the method. Computer readable instructions stored on a computer readable medium, the computer readable instructions being executable by a processor to perform a method of:
Acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, wherein the first palm image data comprises image data of a palm center area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back hand area of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
and determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without requiring a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can be readily obtained by merely performing a little logic programming of the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer-readable program code, it is entirely possible to logically program the method steps such that the controller achieves the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Alternatively, the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing one or more embodiments of the present description.
One skilled in the art will appreciate that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present specification are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to one or more embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a non-persistent memory, a random access memory (RAM), and/or a non-volatile memory in a computer-readable medium, such as a read-only memory (ROM) or a flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present description may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding description of the method embodiments.
The foregoing description is illustrative of embodiments of the present disclosure and is not to be construed as limiting one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of one or more embodiments of the present disclosure, are intended to be included within the scope of the claims of one or more embodiments of the present disclosure.

Claims (13)

1. An image-based living body recognition method, comprising:
acquiring first image data; the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, wherein the first palm image data comprises image data of a palm center area of the living body to be identified;
acquiring second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back-of-hand area of the living body to be identified;
determining a first skin color similarity according to the first face image data and the first palm image data;
determining a second skin color similarity according to the second face image data and the second palm image data;
determining, based on the first skin color similarity and the second skin color similarity, that the living body to be identified is a dark skin color living body, which specifically comprises: judging whether the first skin color similarity is smaller than a first threshold and whether the second skin color similarity is greater than or equal to a second threshold, to obtain a first judgment result; and when the first judgment result indicates that the first skin color similarity is smaller than the first threshold and the second skin color similarity is greater than or equal to the second threshold, determining that the living body to be identified is a dark skin color living body, wherein the second threshold is greater than the first threshold.
2. The method of claim 1, further comprising, prior to said acquiring the first image data:
acquiring continuous multi-frame image data;
judging whether a living body to be identified exists in an image corresponding to the multi-frame image data according to the multi-frame image data.
3. The method of claim 1, further comprising, prior to the determining that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity:
determining a first contour similarity between the face regions in the first image data and the second image data according to the first face image data and the second face image data;
determining a second contour similarity between the palm center region and the back-of-hand region according to the first palm image data and the second palm image data;
judging whether the first contour similarity is greater than or equal to a third threshold and whether the second contour similarity is greater than or equal to a fourth threshold, to obtain a second judgment result;
and when the second judgment result indicates that the first contour similarity is greater than or equal to the third threshold and the second contour similarity is greater than or equal to the fourth threshold, determining that the living body to be identified corresponding to the first image data and the living body to be identified corresponding to the second image data are the same object.
4. The method of claim 1, further comprising, prior to said determining a first skin tone similarity from said first face image data and said first palm image data:
dividing the first image data and the second image data by adopting a face segmentation algorithm to respectively obtain face areas corresponding to the first image data and the second image data;
and dividing the first image data and the second image data by adopting a hand segmentation algorithm to respectively obtain a palm center area in the first image data and a back-of-hand area in the second image data.
5. The method of claim 2, further comprising, prior to said acquiring successive multi-frame image data:
and displaying first prompt information for prompting the object to be identified to execute a first specified action, wherein the first specified action is used for detecting whether the object to be identified is a living body or not.
6. The method of claim 5, wherein the displaying the first prompt information for prompting the object to be identified to perform the first specified action specifically comprises:
displaying a first shooting frame;
and/or an image description or text description for representing the first specified action;
and/or an animation for representing the first specified action.
7. The method of claim 1, further comprising, prior to the acquiring the first image data:
and displaying second prompt information for prompting the living body to be identified to execute a second specified action, wherein the second prompt information is used for prompting the living body to be identified to turn the face toward the camera and to place the hand with the palm center facing the camera.
8. The method according to claim 7, wherein the displaying the second prompt information for prompting the living body to be identified to perform the second specified action comprises:
displaying a second photographing frame for photographing the person image;
and/or a third photographing frame for photographing the palm;
and/or an image description or text description for representing the second specified action;
and/or an animation for representing the second specified action.
9. The method of claim 1, further comprising, prior to said acquiring the second image data:
and displaying third prompt information for prompting the living body to be identified to execute a third specified action, wherein the third prompt information is used for prompting the living body to be identified to keep the face toward the camera and to place the hand with the back of the hand facing the camera.
10. An image-based living body recognition apparatus comprising:
a first image data acquisition module, configured to acquire first image data, wherein the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm center area of the living body to be identified;
a second image data acquisition module, configured to acquire second image data, where the second image data includes second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data includes image data of a back hand area of the living body to be identified;
a first skin color similarity determining module, configured to determine a first skin color similarity according to the first face image data and the first palm image data;
a second skin color similarity determining module, configured to determine a second skin color similarity according to the second face image data and the second palm image data;
and a dark skin color living body determining module, configured to determine that the living body to be identified is a dark skin color living body based on the first skin color similarity and the second skin color similarity;
wherein the dark skin color living body determining module specifically comprises:
a first judging unit, configured to judge whether the first skin color similarity is smaller than a first threshold and whether the second skin color similarity is greater than or equal to a second threshold, to obtain a first judgment result;
and a dark skin color living body determining unit, configured to determine that the living body to be identified is a dark skin color living body when the first judgment result indicates that the first skin color similarity is smaller than the first threshold and the second skin color similarity is greater than or equal to the second threshold, wherein the second threshold is greater than the first threshold.
11. The apparatus of claim 10, further comprising:
a continuous multi-frame image data acquisition module, configured to acquire continuous multi-frame image data;
and a to-be-identified living body judging module, configured to judge, according to the multi-frame image data, whether a living body to be identified exists in an image corresponding to the multi-frame image data.
12. An image-based living body recognition apparatus comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:
acquire first image data, wherein the first image data comprises first face image data of a living body to be identified and first palm image data of the living body to be identified, and the first palm image data comprises image data of a palm center area of the living body to be identified;
acquire second image data, wherein the second image data comprises second face image data of the living body to be identified and second palm image data of the living body to be identified, and the second palm image data comprises image data of a back-of-hand area of the living body to be identified;
determine a first skin color similarity according to the first face image data and the first palm image data;
determine a second skin color similarity according to the second face image data and the second palm image data;
and determine, based on the first skin color similarity and the second skin color similarity, that the living body to be identified is a dark skin color living body, which specifically comprises: judging whether the first skin color similarity is smaller than a first threshold and whether the second skin color similarity is greater than or equal to a second threshold, to obtain a first judgment result; and when the first judgment result indicates that the first skin color similarity is smaller than the first threshold and the second skin color similarity is greater than or equal to the second threshold, determining that the living body to be identified is a dark skin color living body, wherein the second threshold is greater than the first threshold.
13. A computer readable medium having stored thereon computer readable instructions executable by a processor to implement the image-based living body identification method of any one of claims 1 to 9.
CN202010029901.9A 2020-01-13 2020-01-13 Living body identification method, device and equipment based on image Active CN111259757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010029901.9A CN111259757B (en) 2020-01-13 2020-01-13 Living body identification method, device and equipment based on image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010029901.9A CN111259757B (en) 2020-01-13 2020-01-13 Living body identification method, device and equipment based on image

Publications (2)

Publication Number Publication Date
CN111259757A CN111259757A (en) 2020-06-09
CN111259757B true CN111259757B (en) 2023-06-20

Family

ID=70950438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010029901.9A Active CN111259757B (en) 2020-01-13 2020-01-13 Living body identification method, device and equipment based on image

Country Status (1)

Country Link
CN (1) CN111259757B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797735A (en) * 2020-06-22 2020-10-20 深圳壹账通智能科技有限公司 Face video recognition method, device, equipment and storage medium
CN113194323B (en) * 2021-04-27 2023-11-10 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN113569691A (en) * 2021-07-19 2021-10-29 新疆爱华盈通信息技术有限公司 Human head detection model generation method and device, human head detection model and human head detection method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013131407A1 (en) * 2012-03-08 2013-09-12 无锡中科奥森科技有限公司 Double verification face anti-counterfeiting method and device
CN104765440A (en) * 2014-01-02 2015-07-08 株式会社理光 Hand detecting method and device
CN104951940A (en) * 2015-06-05 2015-09-30 西安理工大学 Mobile payment verification method based on palmprint recognition
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device
CN107223258A (en) * 2017-03-31 2017-09-29 中控智慧科技股份有限公司 Image-pickup method and equipment
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN108416338A (en) * 2018-04-28 2018-08-17 深圳信息职业技术学院 A kind of non-contact palm print identity authentication method
WO2019127262A1 (en) * 2017-12-28 2019-07-04 深圳前海达闼云端智能科技有限公司 Cloud end-based human face in vivo detection method, electronic device and program product
CN109976519A (en) * 2019-03-14 2019-07-05 浙江工业大学 A kind of interactive display unit and its interactive display method based on augmented reality

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805047B (en) * 2018-05-25 2021-06-25 北京旷视科技有限公司 Living body detection method and device, electronic equipment and computer readable medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013131407A1 (en) * 2012-03-08 2013-09-12 无锡中科奥森科技有限公司 Double verification face anti-counterfeiting method and device
CN104765440A (en) * 2014-01-02 2015-07-08 株式会社理光 Hand detecting method and device
CN104951940A (en) * 2015-06-05 2015-09-30 西安理工大学 Mobile payment verification method based on palmprint recognition
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN107223258A (en) * 2017-03-31 2017-09-29 中控智慧科技股份有限公司 Image-pickup method and equipment
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device
WO2019127262A1 (en) * 2017-12-28 2019-07-04 深圳前海达闼云端智能科技有限公司 Cloud end-based human face in vivo detection method, electronic device and program product
CN108416338A (en) * 2018-04-28 2018-08-17 深圳信息职业技术学院 A kind of non-contact palm print identity authentication method
CN109976519A (en) * 2019-03-14 2019-07-05 浙江工业大学 A kind of interactive display unit and its interactive display method based on augmented reality

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Noor Amjed; Fatimah Khalid; Rahmita Wirza O.K. Rahmat; Hizmawati Bint Madzin. "A Robust Geometric Skin Colour Face Detection Method under Unconstrained Environment of Smartphone Database". Applied Mechanics and Materials, 2019, pp. 31-37. *
Seokhoon Kang; Byoungjo Choi; Donghw Jo. "Faces detection method based on skin color modeling". Journal of Systems Architecture, 2016, full text. *
Yuseok Ban; Sang-Ki Kim; Sangyoun Lee. "Face detection based on skin color likelihood". Pattern Recognition, 2014, full text. *
Fan Wenbing; Zhu Lianjie. "A gesture detection and recognition method based on skin color feature extraction". Modern Electronics Technique, 2017, full text. *

Also Published As

Publication number Publication date
CN111259757A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
TWI687832B (en) Biometric system and computer-implemented method for biometrics
EP3338217B1 (en) Feature detection and masking in images based on color distributions
US9922238B2 (en) Apparatuses, systems, and methods for confirming identity
KR102465532B1 (en) Method for recognizing an object and apparatus thereof
CN106022209B (en) A kind of method and device of range estimation and processing based on Face datection
US9323979B2 (en) Face recognition performance using additional image features
CN111259757B (en) Living body identification method, device and equipment based on image
US10460164B2 (en) Information processing apparatus, information processing method, eyewear terminal, and authentication system
US20160019420A1 (en) Multispectral eye analysis for identity authentication
US20170091550A1 (en) Multispectral eye analysis for identity authentication
EP3362942B1 (en) Electronic devices with improved iris recognition and methods thereof
WO2016010724A1 (en) Multispectral eye analysis for identity authentication
CN103902958A (en) Method for face recognition
CN111353404B (en) Face recognition method, device and equipment
US20120320181A1 (en) Apparatus and method for security using authentication of face
CN108416291B (en) Face detection and recognition method, device and system
CN104063709B (en) Sight line detector and method, image capture apparatus and its control method
CN108197585A (en) Recognition algorithms and device
JP2020518879A (en) Detection system, detection device and method thereof
CN111860394A (en) Gesture estimation and gesture detection-based action living body recognition method
WO2005055143A1 (en) Person head top detection method, head top detection system, and head top detection program
CN110363111B (en) Face living body detection method, device and storage medium based on lens distortion principle
Ma et al. Multi-perspective dynamic features for cross-database face presentation attack detection
CN107368811B (en) LBP-based face feature extraction method under infrared and non-infrared illumination
Jang et al. Skin region segmentation using an image-adapted colour model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant