CN111723655A - Face image processing method, device, server, terminal, equipment and medium - Google Patents


Info

Publication number
CN111723655A
Authority
CN
China
Prior art keywords
face image
registered
user
image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010398944.4A
Other languages
Chinese (zh)
Other versions
CN111723655B (en)
Inventor
张学军
史忠伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuba Co Ltd
Original Assignee
Wuba Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuba Co Ltd filed Critical Wuba Co Ltd
Priority to CN202010398944.4A priority Critical patent/CN111723655B/en
Publication of CN111723655A publication Critical patent/CN111723655A/en
Application granted granted Critical
Publication of CN111723655B publication Critical patent/CN111723655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55 Detecting local intrusion or implementing counter-measures
    • G06F21/554 Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/64 Protecting data integrity, e.g. using checksums, certificates or signatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Bioethics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a face image processing method, apparatus, server, terminal, device and medium. The method includes: obtaining a face image of a user to be registered; adding the face image of the user to be registered into a target face image cluster, where the similarity between the face images belonging to the target face image cluster is greater than a preset similarity threshold; and generating prompt information when the growth rate of face images in the target face image cluster exceeds a preset growth rate, where the prompt information prompts processing of the face images added to the target face image cluster within a preset time period. The invention can improve the sensitivity of recognizing face attacks and thereby improve the security of face recognition.

Description

Face image processing method, device, server, terminal, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a server, a terminal, a device, and a medium for processing a face image.
Background
At present, face images are increasingly used in daily life and work. For example, face images are applied in face recognition to ensure the safety and accuracy of mobile payment, security checks and information comparison.
As the application fields of face recognition using face images widen, a large number of face images are generated. This large number of face images increases the number of images available for comparison during face recognition, but it also brings a series of problems. For example, such systems are vulnerable to exploitation, so current face recognition technology has a limited ability to resist spoofing attacks such as false faces. Moreover, the sensitivity of such systems in recognizing attacks is low, and an attack using false face registration images is difficult to identify in time.
Therefore, how to improve the capability of the face recognition technology to resist spoofing attacks becomes a problem to be solved urgently in the face recognition technology.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present invention provide a method, an apparatus, a server, a terminal, a device, and a medium for processing a face image, which are intended to solve the technical problem that face recognition technology in the related art is weak at resisting spoofing attacks such as false faces.
In order to solve the technical problem, the invention adopts the following scheme:
in a first aspect, an embodiment of the present invention provides a face image processing method, where the method includes:
acquiring a face image of a user to be registered;
adding the face image of the user to be registered into a target face image cluster, wherein the similarity between the face images belonging to the target face image cluster is greater than a preset similarity threshold;
and when the growth speed of the face image in the target face image cluster is greater than a preset growth speed, generating prompt information, wherein the prompt information is used for prompting the processing of the face image added into the target face image cluster within a preset time period.
Optionally, adding the facial image of the user to be registered to the target facial image cluster includes:
when the similarity between the face image of the user to be registered and the target registered face image in each registered face image is determined to be larger than the preset similarity threshold value, adding the face image of the user to be registered to a target face image cluster to which the target registered face image belongs;
and when the similarity between the face image of the user to be registered and each registered face image is determined not to be greater than the preset similarity threshold, creating a target face image cluster corresponding to the user to be registered, and adding the face image to be registered into the target face image cluster.
Optionally, after obtaining the face image of the user to be registered, the method further includes:
performing living body detection on the face image of the user to be registered;
adding the face image of the user to be registered into a target face image cluster, wherein the step of adding the face image of the user to be registered into the target face image cluster comprises the following steps:
and when the facial image of the user to be registered passes the living body detection, adding the facial image of the user to be registered into the target facial image cluster.
Optionally, before obtaining the face image of the user to be registered, the method further includes:
responding to a registration application sent by the user to be registered, and sending an action instruction to the user to be registered;
the method for acquiring the face image of the user to be registered comprises the following steps:
acquiring a face image of the user to be registered responding to the action instruction;
the living body detection is carried out on the face image of the user to be registered, and the living body detection comprises the following steps:
determining whether the facial image of the user to be registered is matched with the action instruction;
and when the facial image of the user to be registered is determined to be matched with the action instruction, determining that the facial image of the user to be registered passes the living body detection.
Optionally, performing living body detection on the face image of the user to be registered includes:
performing living body and/or screen shooting detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a living body image;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes the living body detection.
Optionally, performing living body detection on the face image of the user to be registered includes:
performing image tampering detection on the facial image of the user to be registered to determine whether the facial image of the user to be registered is a tampered image;
and when the facial image of the user to be registered is determined not to be a tampered image, determining that the facial image of the user to be registered passes the living body detection.
Optionally, when obtaining the facial image of the user to be registered in response to the action instruction, the method further includes:
acquiring a face video of the user to be registered responding to the action instruction;
the living body detection is carried out on the face image of the user to be registered, and the living body detection comprises the following steps:
extracting multiple frames of face images from the face video in sequence according to the time sequence, wherein each frame of face image in the multiple frames of face images comprises a key position point;
determining the matching degree of the position change of the key position points in the multi-frame face images according to the time sequence and the action instruction;
and when the matching degree is greater than the preset matching degree, determining that the face image of the user to be registered passes the living body detection.
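The key-point matching just described can be illustrated with a minimal sketch. This is not the patented implementation: it assumes a single landmark (the nose-tip x-coordinate) has already been extracted from each frame in time order, a hypothetical instruction vocabulary of `turn_left`/`turn_right`, and an illustrative matching threshold of 0.8.

```python
from typing import List

MATCH_THRESHOLD = 0.8  # illustrative "preset matching degree"

def motion_match_degree(nose_x: List[float], instruction: str) -> float:
    """Fraction of frame-to-frame movements consistent with the instruction.

    nose_x: x-coordinate of one key point (nose tip) per frame, in time
    order. A real system would track many facial landmarks.
    """
    deltas = [b - a for a, b in zip(nose_x, nose_x[1:])]
    if not deltas:
        return 0.0
    if instruction == "turn_left":
        consistent = sum(1 for d in deltas if d < 0)
    elif instruction == "turn_right":
        consistent = sum(1 for d in deltas if d > 0)
    else:
        raise ValueError(f"unknown instruction: {instruction}")
    return consistent / len(deltas)

def passes_liveness(nose_x: List[float], instruction: str) -> bool:
    # liveness passes when the matching degree exceeds the preset degree
    return motion_match_degree(nose_x, instruction) > MATCH_THRESHOLD
```

For example, a nose tip moving steadily left across five frames matches a `turn_left` instruction with degree 1.0 and passes, while a rightward drift does not.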
Optionally, when sending an action instruction to the user to be registered in response to the registration application sent by the user to be registered, the method further includes:
controlling the terminal currently logged in by the user to be registered to flash the screen color;
obtaining the facial image of the user to be registered responding to the action instruction, comprising:
acquiring a face image which is shot by the user to be registered under the color of the flickering screen in response to the action instruction;
the living body detection is carried out on the face image of the user to be registered, and the living body detection comprises the following steps:
performing visible color light reflection spectrum analysis on the face image of the user to be registered to detect whether the face image of the user to be registered is a living body image;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes the living body detection.
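The screen-flash reflection check can be sketched as follows. Every name and the data representation here are assumptions for illustration: each sample is a (flashed color channel, change in the face region's mean RGB versus a no-flash baseline) pair, presumed computed by an upstream step; a real implementation would analyse the reflection spectrum far more carefully.

```python
from typing import List, Tuple

RGB = Tuple[float, float, float]

def dominant_channel(rgb: RGB) -> int:
    # index of the strongest component in a mean-RGB triple
    return max(range(3), key=lambda i: rgb[i])

def reflection_consistent(samples: List[Tuple[int, RGB]]) -> bool:
    """samples: (flashed_channel_index, face_region_mean_rgb_delta) pairs.

    A live face should reflect mostly the flashed color, so the dominant
    channel of each delta should match the channel that was flashed; a
    photo or replayed screen tends to break this correspondence.
    """
    return all(dominant_channel(delta) == ch for ch, delta in samples)
```

For instance, a red flash producing a mostly-red delta and a blue flash producing a mostly-blue delta would be judged consistent with a live face.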
Optionally, the method further comprises at least one of:
responding to the prompt information, and performing living body detection on the face image added to the target face image cluster within a preset time period;
responding to the prompt information, and sending the face image added to the target face image cluster in the preset time period to a preset terminal;
responding to the prompt information, and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face cluster so as to prompt the user to be attacked.
In a second aspect, an embodiment of the present invention provides a face image processing apparatus, where the apparatus includes:
the face image acquisition module is used for acquiring a face image of a user to be registered;
the face image adding module is used for adding the face image of the user to be registered into a target face image cluster, wherein the similarity between the face images belonging to the target face image cluster is greater than a preset similarity threshold;
and the prompt information generation module is used for generating prompt information when the growth speed of the face images in the target face image cluster is greater than a preset growth speed, wherein the prompt information is used for prompting the processing of the face images added into the target face image cluster within a preset time period.
Optionally, the facial image adding module includes:
the first adding module is used for adding the face image of the user to be registered to a target face image cluster to which the target registered face image belongs when the similarity between the face image of the user to be registered and the target registered face image in each registered face image cluster is determined to be greater than the preset similarity threshold;
and the second adding module is used for creating a target face image cluster corresponding to the user to be registered and adding the face image to be registered into the target face image cluster when the similarity between the face image of the user to be registered and each registered face image is determined not to be greater than the preset similarity threshold.
Optionally, the apparatus further comprises:
the living body detection module is used for carrying out living body detection on the face image of the user to be registered;
the facial image adding module is specifically used for adding the facial image of the user to be registered into the target facial image cluster when the facial image of the user to be registered passes through the live body detection.
Optionally, the apparatus further comprises:
the action instruction sending module is used for responding to the registration application sent by the user to be registered and sending an action instruction to the user to be registered;
the face image obtaining module is specifically used for obtaining a face image of the user to be registered responding to the action instruction;
the living body detection module is specifically used for determining whether the face image of the user to be registered is matched with the action instruction; and when the facial image of the user to be registered is determined to be matched with the action instruction, determining that the facial image of the user to be registered passes the living body detection.
Optionally, the liveness detection module comprises:
the screen shooting detection unit is used for carrying out living body and/or screen shooting detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a living body image;
and the first determining unit is used for determining that the face image of the user to be registered passes the living body detection when the face image of the user to be registered is determined to be the living body image.
Optionally, the liveness detection module comprises:
the image tampering detection unit is used for carrying out image tampering detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a tampered image;
a second determination unit, configured to determine that the face image of the user to be registered passes the live body detection when it is determined that the face image of the user to be registered is not a tampered image.
Optionally, the face image obtaining module is specifically configured to obtain a face video of the user to be registered responding to the action instruction;
the living body detecting module includes:
the face image extraction unit is used for extracting multiple frames of face images from the face video in sequence according to the time sequence, wherein each frame of face image in the multiple frames of face images comprises a key position point;
the matching degree determining unit is used for determining the matching degree of the position change of the key position points in the multi-frame face images according to the time sequence and the action instruction;
and the third determining unit is used for determining that the face image of the user to be registered passes the living body detection when the matching degree is greater than the preset matching degree.
Optionally, the action instruction sending module includes:
the color light control unit is used for controlling the terminal which is currently logged in by the user to be registered to flash the screen color;
the face image obtaining module is specifically configured to obtain a face image which is shot by the user to be registered in response to the action instruction and in the color of the flickering screen;
the living body detection module includes:
the reflection spectrum analysis unit is used for performing reflection spectrum analysis of visible colored light on the face image of the user to be registered so as to detect whether the face image of the user to be registered is a living body image;
and the fourth determining unit is used for determining that the face image of the user to be registered passes the living body detection when the face image of the user to be registered is determined to be the living body image.
Optionally, the apparatus further comprises:
the first response module is used for responding to the prompt information and carrying out living body detection on the face images added into the target face image cluster within a preset time period;
the second response module is used for responding to the prompt information and sending the face images added to the target face image cluster in the preset time period to a preset terminal;
and the third response module is used for responding to the prompt information and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face cluster so as to prompt that the user is attacked.
In a third aspect, the present invention provides a server, including a facial image processing apparatus and a database, where the database stores a plurality of facial image clusters, and each facial image cluster includes a plurality of facial images of a same user, and the facial image processing apparatus is configured to execute the facial image processing method according to the first aspect.
In a fourth aspect, the present invention provides a mobile terminal, including a face image processing apparatus, configured to execute the face image processing method according to the first aspect.
In a fifth aspect, the present invention provides an electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the face image processing method of the first aspect.
In a sixth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the face image processing method of the first aspect.
Compared with the prior art, the invention at least has the following advantages:
in the embodiment of the invention, when the face image of the user to be registered is acquired, the face image can be added into the target face image cluster, and when the growth rate of the face images in the target face image cluster exceeds the preset growth rate, prompt information is generated to prompt the processing of the face images added to the target face image cluster within the preset time period. A growth rate greater than the preset growth rate indicates that the face images in the target face image cluster are growing abnormally; in practice, this may mean the target face image cluster is under a spoofing attack.
Because whether prompt information is generated is determined according to the growth rate of the face images in the target face image cluster, the sensitivity of sensing abnormal face image attacks can be improved, attacks on the face images can be recognized in time, and the security of face recognition is guaranteed.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive labor.
Fig. 1 is an application environment diagram of a face image processing method in an embodiment of the present invention;
FIG. 2 is a flow chart of steps of a method for processing a face image according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of a method for processing a face image according to another embodiment of the present invention;
Fig. 4 is a flowchart of a step of performing living body detection on a face image of a user to be registered in an optional face image processing method in the embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a server in an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a mobile terminal in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Based on the technical problems to be solved, the inventors propose the face image processing method in the embodiments of the invention. The method monitors the growth rate of face images in a face image cluster; when the growth rate is determined to be greater than a preset growth rate, it can be determined that the face images are under a spoofing attack, and prompt information is generated to prompt processing of the face images in the target face image cluster. This addresses the low sensitivity of existing face recognition technology in recognizing spoofing attacks.
In the following, the face image processing scheme of the present invention is described clearly and completely.
Referring to fig. 1, an application environment diagram of a face image processing method according to an embodiment of the present invention is shown. As shown in fig. 1, the face image processing method may be applied to a server 11 or a mobile terminal 12. As shown in 1.1 in fig. 1, when applied to the server 11, the server 11 may be communicatively connected with a plurality of clients 13 to rapidly recognize the large number of face images to be recognized uploaded by those clients, and may further serve business fields with huge demand for face recognition, such as information entry and banking transactions. When applied to the mobile terminal 12, the method suits business fields requiring only a small number of face identifications, such as access control of office or residential buildings, company attendance management, and security audits. Of course, the method can also be applied to interaction scenarios between the server 11 and the mobile terminal 12.
The mobile terminal can be an intelligent device such as a mobile phone, a tablet computer and an all-in-one machine.
Referring to fig. 2, a flowchart illustrating steps of a face image processing method according to an embodiment of the present invention is shown. As shown in fig. 2, the method for processing a face image may specifically include the following steps:
step S21, a face image of the user to be registered is obtained.
The user to be registered may be a user registering on a service client that requires face recognition, and the face image may be a face image shot by the user to be registered in real time. Specifically, the face image may be an image captured in real time according to a prompt of the client when the user registers, or a face image acquired from a database storing face images of users to be registered.
When the method is applied to the server, the server can obtain the face image from a client in communication connection with the server, or can obtain the face image from a database in which the face images of users to be registered are stored.
And step S22, adding the facial image of the user to be registered into the target facial image cluster.
And the similarity between the face images belonging to the target face image cluster is greater than a preset similarity threshold.
The target face image cluster may be one of a plurality of registered face image clusters. The face images in each face image cluster belong to the same user, and the similarity between the face images in each cluster is greater than a preset similarity threshold. The similarity between each face image in the target face image cluster and the face image of the user to be registered is also greater than the preset similarity threshold. That is, the face image of the user to be registered is added to the face image cluster belonging to the same user as the user to be registered.
For example, taking the face image of the user w to be registered as the image B, the face image clusters of the existing 3 registered users are respectively the image cluster a1, the image cluster a2 and the image cluster A3. If the image cluster a1 includes 10 face images of the user w, the image cluster a2 includes 4 face images of the user h, and the image cluster A3 includes 8 face images of the user g, the face image B may be added to the face image cluster a 1.
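The cluster-assignment step in this example can be sketched roughly as follows. This assumes face images have already been embedded as feature vectors by some upstream model, and uses cosine similarity with an illustrative threshold of 0.9; the patent does not specify the similarity measure or threshold.

```python
import math
from typing import Dict, List, Tuple

Vec = Tuple[float, ...]

SIM_THRESHOLD = 0.9  # illustrative "preset similarity threshold"

def cosine(a: Vec, b: Vec) -> float:
    # cosine similarity between two face feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def add_to_cluster(clusters: Dict[str, List[Vec]], face_vec: Vec) -> str:
    """Add face_vec to the first cluster whose best match exceeds the
    threshold; otherwise create a new cluster for this user.

    clusters maps a cluster id to the feature vectors of its face images.
    Returns the id of the cluster the face image was added to.
    """
    for cid, faces in clusters.items():
        if max(cosine(face_vec, f) for f in faces) > SIM_THRESHOLD:
            faces.append(face_vec)
            return cid
    new_id = f"cluster_{len(clusters)}"
    clusters[new_id] = [face_vec]
    return new_id
```

In the example above, image B of user w would land in image cluster a1 because its similarity to the existing images of user w exceeds the threshold, while a face unlike any registered user would start a new cluster.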
And step S23, when the growth speed of the face images in the target face image cluster is greater than the preset growth speed, generating prompt information.
And the prompt information is used for prompting the processing of the face images added into the target face image cluster within a preset time period.
To improve the ability to recognize large numbers of similar pictures, increase the sensitivity to malicious attacks that use large numbers of similar pictures, and ensure the security of face recognition, the method can monitor the growth rate of face images in the target face image cluster in real time.
The growth rate of face images in the target face image cluster may refer to the average number of face images added to the cluster within a certain time period. For example, if the number of face images is monitored to increase by 48 images in 24 hours, the growth rate is 2 images per hour. In practice, the preset growth rate may be set in advance according to requirements, and the embodiment of the present invention does not limit it.
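The growth-rate check might look like the following sketch. The sliding 24-hour window, the hour-based timestamps, and the function names are illustrative choices, not part of the claimed method.

```python
from typing import List

PRESET_GROWTH_RATE = 2.0  # e.g. 48 images in 24 hours = 2 per hour

def growth_rate(timestamps: List[float], window_hours: float = 24.0) -> float:
    """Average number of images added per hour over the most recent window.

    timestamps: arrival times of face images in hours on any monotonic
    clock; the most recent timestamp is treated as "now".
    """
    if not timestamps:
        return 0.0
    now = max(timestamps)
    recent = [t for t in timestamps if now - t <= window_hours]
    return len(recent) / window_hours

def should_alert(timestamps: List[float]) -> bool:
    # generate prompt information only when the rate exceeds the preset rate
    return growth_rate(timestamps) > PRESET_GROWTH_RATE
```

With 48 images arriving over 24 hours the rate is exactly 2.0 and no prompt is generated; 96 images in the same window yields 4.0 and triggers the prompt.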
In specific implementation, when the monitored growth rate of face images in the target face image cluster reaches the preset growth rate, a large number of similar pictures have been added to the target face image cluster within a short time, and the cluster may be under a similar-picture attack; therefore, prompt information can be generated to prompt processing of the target face image cluster.
In practice, when processing of the target face image cluster is prompted, processing of the face images added to the cluster within a preset time period can be prompted. The preset time period may be an abnormal growth period in which the growth rate of face images in the cluster is greater than the preset growth rate, so that the face images involved in the abnormal attack can be processed. Alternatively, the preset time period may be a normal growth period in which the growth rate does not exceed the preset growth rate, so that the face images added during normal growth can be processed to ensure that normal face images are not interfered with.
Wherein, when the face image added to the target face image cluster within the preset time period is processed, if the processed face image is the image added to the target face image cluster within the abnormal growth time period, it indicates that a part of the face image which is abnormally attacked needs to be processed, in this case, the part of the face image which is abnormally attacked can be processed for anti-attack, for example, the part of the face image can be abnormally marked to avoid that the similar face image attacks the target face image cluster again,
If the processed face images were added to the target face image cluster during the normal growth time period, protective processing may be performed on these face images; for example, they may be marked as normal to identify them as normal face images in the target face cluster.
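The marking described above can be sketched as follows. This is a hypothetical helper; representing the abnormal growth time period as a timestamp interval, and the marks `"abnormal"` and `"normal"`, are assumptions made for illustration.

```python
def mark_cluster_images(cluster_images, abnormal_start, abnormal_end):
    """Mark images added during the abnormal growth period as 'abnormal'
    (anti-attack processing) and the rest as 'normal' (protective processing).

    cluster_images: iterable of (timestamp, image_id) pairs.
    Returns a dict mapping image_id -> mark.
    """
    marks = {}
    for timestamp, image_id in cluster_images:
        if abnormal_start <= timestamp <= abnormal_end:
            marks[image_id] = "abnormal"
        else:
            marks[image_id] = "normal"
    return marks
```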
Of course, in practice, the processing of the face images added to the target face image cluster within the preset time period is not limited to the above. To ensure the safety of face recognition and improve the protection of the face images in the target face image cluster when an attack occurs, stricter face detection may also be performed on the user corresponding to the target face image cluster, such as face liveness detection, iris verification, or fingerprint verification.
In the embodiment of the invention, the growth rate of face images in the target face image cluster can be monitored, and when the growth rate is greater than the preset growth rate, prompt information is generated to prompt processing of the face images added to the target face image cluster within a preset time period. This improves the sensitivity of detecting whether the face images in the target face cluster are under attack, so that attacks on face images can be identified in time, the face images added to the target face image cluster within the preset time period can be processed in time, and the reduction in face recognition safety caused by attacks is avoided.
Referring to fig. 3, a flowchart illustrating steps of a face image processing method in another embodiment is shown, and as shown in fig. 3, the method may specifically include the following steps:
step S31: and acquiring a face image of the user to be registered.
The face image of the user to be registered may be a face image in a face video shot by the user during registration, or a face picture taken by the user.
In practice, in order to further improve the recognition sensitivity for the face image of the user to be registered, whether the face image of the user to be registered comes from a living body may be recognized, and the subsequent face image adding operation is performed only when it does, so that all face images in each face image cluster come from real, living faces. The following steps may be included after step S31:
step S32: and performing living body detection on the face image of the user to be registered.
In face recognition applications, living body detection may refer to detecting whether the user being verified is a real, live person.
Performing living body detection on the face image of the user to be registered can prevent attacks from fake face models, ensure that the face images in the face image cluster all come from real faces, and improve the safety of face recognition. Specifically, when the face image of the user to be registered passes the living body detection, the process may go to step S33 to add the face image of the user to be registered to the target face image cluster. When the living body detection fails, a face-model attack may be occurring; alarm information can be output and displayed, and in practice, the alarm information can also be sent to a designated terminal to prompt a worker to handle it.
Step S33: and adding the face image of the user to be registered into a target face image cluster.
The similarity between face images belonging to the target face image cluster is greater than a preset similarity threshold.
When the face image of the user to be registered passes the living body detection, the similarity between the face image of the user to be registered and each registered face image can be determined, and the face image of the user to be registered is added to the corresponding target face image cluster according to these similarities. The registered face images are face images that have already been registered, and each registered face image may belong to a face image cluster; for example, if a face image a has already been registered, the face image a may be considered a registered face image, and the face image a belongs to the face image cluster a 1.
In this embodiment, the similarity between the face image of the user to be registered and a registered face image may represent the degree of similarity between the two, and may be a value between 0 and 1. The closer the similarity is to 1, the more similar the two face images are, which in practice may mean that the two face images come from the same person. Conversely, the closer the similarity is to 0, the greater the difference between the two.
In specific implementation, the Euclidean distance or the cosine distance may be used to measure the similarity between two face images; that is, the Euclidean distance or cosine distance between the face image of the user to be registered and the registered face image is determined, and the determined distance serves as the similarity between the two. The Euclidean distance directly calculates, with the Euclidean formula, the distance between the feature vectors of the two face images, and the cosine distance measures the difference between the two face images by the cosine of the angle between their two vectors in a vector space.
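The two measures can be sketched as follows, under the assumption that each face image has already been converted to a feature (embedding) vector by some face recognition model; the feature extraction itself is outside this sketch, and the function names are illustrative.

```python
import math

def euclidean_distance(a, b):
    """Straight-line distance between two face feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Cosine of the angle between two face feature vectors.

    1.0 means identical direction (very similar faces); values near 0
    mean the vectors are nearly orthogonal (very different faces).
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```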
In specific implementation, when the similarity between the face image of the user to be registered and a target registered face image among the registered face images is determined to be greater than the preset similarity threshold, the face image of the user to be registered is added to the target face image cluster to which the target registered face image belongs.
In this embodiment, the similarity threshold may be set as required, and in practice, the larger the preset threshold is, the higher the standard for face recognition is, which is beneficial to resisting spoofing of counterfeit face images.
When the similarity between the face image of the user to be registered and each registered face image is determined, a plurality of similarities are obtained. The similarities greater than the preset similarity threshold can then be screened out, and the target registered face images corresponding to them obtained, indicating that the face image of the user to be registered and the target registered face image come from the same face; the face image of the user to be registered can then be added to the target face image cluster to which the target registered face image belongs.
When the similarity between the face image of the user to be registered and each registered face image is not greater than the preset similarity threshold, creating a target face image cluster corresponding to the user to be registered, and adding the face image to be registered into the target face image cluster.
In this case, it indicates that there is no face image similar to the face image of the user to be registered among the registered face images, so the face image of the user to be registered may be considered a new face image of an unregistered user. The user to be registered may be registered according to the face image, a corresponding target face image cluster is created for the face image of the user to be registered, and the face image of the user to be registered is added to the target face image cluster.
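Step S33 and the cluster-creation fallback above can be sketched together. This is a hypothetical implementation: the cluster-ID scheme, the embedding representation, and the use of cosine similarity as the similarity measure are illustrative assumptions, not a definitive realization of the invention.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def add_to_cluster(embedding, clusters, threshold=0.8):
    """Add a new face embedding to the most similar cluster, or create one.

    clusters: dict mapping cluster_id -> list of member embeddings.
    Returns (cluster_id, created_new_cluster).
    """
    best_id, best_sim = None, threshold
    for cluster_id, members in clusters.items():
        for member in members:
            sim = cosine_similarity(embedding, member)
            if sim > best_sim:  # strictly greater than the preset threshold
                best_id, best_sim = cluster_id, sim
    created = best_id is None
    if created:
        # No registered face is similar enough: create a new target cluster.
        best_id = "cluster_%d" % len(clusters)
        clusters[best_id] = []
    clusters[best_id].append(embedding)
    return best_id, created
```

Raising `threshold` tightens the standard for treating two images as the same face, which, as the text notes, helps resist spoofing with counterfeit face images.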
With this technical scheme, when the user to be registered is not any of the registered users, a new target face image cluster is created for the user to be registered, so that the face image clusters of the registered users can be dynamically updated.
Step S34: and when the growth speed of the face images in the target face image cluster is greater than a preset growth speed, generating prompt information.
The prompt information is used to prompt processing of the face images added to the target face image cluster within a preset time period.
This step is similar to the process of step S23, and reference is made to step S23, which is not described herein again.
In one embodiment, after the prompt information is generated, the face images added to the target face image cluster within a preset time period may be processed in response to the prompt information, where the preset time period may be an abnormal growth time period in which the growth rate of the face images in the target face image cluster is greater than a preset growth rate. Specifically, the processing may be at least one processing manner of the following step S35 to step S37.
Step S35: and responding to the prompt information, and carrying out living body detection on the face image added to the target face image cluster within a preset time period.
In practice, after the prompt information is generated, the face image added to the target face image cluster within the preset time period may be subjected to living body detection in response to the prompt information, and specifically, the living body detection may be performed by using an existing image living body detection technology.
Step S36: and responding to the prompt information, and sending the face image added to the target face image cluster in the preset time period to a preset terminal.
In this embodiment, offline living body detection may also be performed on the face images added to the target face image cluster, so as to ensure the account security of the target registered user. In specific implementation, manual review prompt information can be generated and sent to the terminal equipment held by a management user, so that the management user can arrange manual living body detection for the corresponding users in the target face image cluster.
Step S37: and responding to the prompt message, and outputting alarm information.
The alarm information comprises user information corresponding to the target face cluster, so as to prompt that the user may be under attack.
In this embodiment, the alarm information may include user information corresponding to the target face image cluster, and the alarm information may be sent to a designated user, for example, a management user, so that the management user can further process the user information corresponding to the target face image cluster through the alarm information.
Of course, the output alarm information may also be sound and light alarm information; for example, a buzzer alarm is output, or a warning lamp is controlled to flash, so that the manager can handle the abnormal condition offline in time.
In this embodiment, when performing live detection on the face image of the user to be registered, live detection in different embodiments may be performed according to an attack mode to be prevented. The specific description is as follows:
Embodiment A: in this embodiment, in order to improve the ability of living body detection to resist attacks such as photo cards, screen replays, and face masks, living body detection may be performed using a face image in which the user to be registered makes an instructed action. Specifically, the living body detection of the face image of the user to be registered may include the following steps:
step S321: and responding to the registration application sent by the user to be registered, and sending an action instruction to the user to be registered.
The registration application may be a registration application sent by the user to be registered when performing registration operation, and the action instruction may be an instruction generated and sent at random, and the action instruction may be used to instruct the user to be registered to make a targeted action, such as blinking, raising hands, smiling, opening mouth, shaking head, and the like.
Step S322: and acquiring the face image of the user to be registered responding to the action instruction.
Responding to the action instruction means that the user to be registered makes the action indicated by the action instruction; the obtained face image of the user to be registered is thus an image of the user making the instructed action. For example, if the user to be registered makes a mouth-opening action, the obtained face image may be a picture of the user to be registered with the mouth open.
Step S323, determining whether the face image of the user to be registered matches the action instruction.
In practice, matching the face image of the user to be registered with the action instruction may refer to matching the position and shape features of key points in the face image against the action instruction. If the action instruction is an instruction to open the mouth and the mouth in the face image is in an open shape, the image can be determined to match the action instruction; if the mouth is closed, the image may be determined not to match the action instruction, in which case an attack by photo cards, screen replays, or face masks may be occurring.
Step S324, when the facial image of the user to be registered is determined to be matched with the action instruction, determining that the facial image of the user to be registered passes the living body detection.
In specific implementation, when the face image of the user to be registered matches the action instruction, it can be determined that the face image comes from the operation of a real person and passes the living body detection. When the face image of the user to be registered does not match the action instruction, it is determined that the face image does not come from the operation of a real person; the face recognition of the user to be registered can be stopped, and alarm information can be generated to prompt that an attack may be occurring.
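The mouth-open example in step S323 can be sketched with a simple landmark check. The landmark names and the aspect-ratio threshold are hypothetical; a real system would obtain the points from a facial landmark detector (for example a 68-point model), which is outside this sketch.

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_matches_open_instruction(landmarks, threshold=0.4):
    """True when the mouth's vertical opening, relative to its width,
    exceeds the threshold, i.e. the mouth shape matches an 'open mouth'
    action instruction.

    landmarks: dict with (x, y) points 'mouth_top', 'mouth_bottom',
    'mouth_left', 'mouth_right'.
    """
    opening = _dist(landmarks["mouth_top"], landmarks["mouth_bottom"])
    width = _dist(landmarks["mouth_left"], landmarks["mouth_right"])
    return opening / width > threshold
```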
Embodiment B: in combination with embodiment A, that is, in the case of sending an action instruction to the user to be registered in response to the registration application and obtaining the user's response to the action instruction, in order to improve the ability of living body detection to resist still-image attacks, i.e. attacks using multiple still images or pre-recorded videos, a face video may be obtained instead of a single face image, and living body detection may be performed on the face video. The method specifically includes the following steps:
step S321', obtaining a face video of the user to be registered responding to the action instruction.
In particular, the action instruction may include a plurality of sequential actions to instruct the user to be registered to perform the corresponding actions in sequence. For example, if the action instruction sequentially comprises lowering the head, blinking, and shaking the head, the user needs to make the head-lowering, blinking, and head-shaking actions in that order.
Accordingly, referring to fig. 4, a flowchart illustrating steps of performing live body detection on the face video is shown, and specifically, the steps may include the following steps:
step S322', extracting multiple frames of face images from the face video in sequence according to the time sequence, wherein each frame of face image in the multiple frames of face images comprises a key position point.
The face video may include a plurality of frames of images in chronological sequence, and each frame may have its own timestamp information, where the timestamp represents the time order of the frame in the face video. In the embodiment of the invention, multiple frames of face images arranged in chronological order can be extracted from the face video in sequence, where each frame of face image has a key position point. A key position point may be a position point on the face that changes with the user's action, and the number of key position points may be one or more. For example, a key position point may be the nose, chin, cheek, or eyes.
And step S323', determining the matching degree of the position change of the key position points in the multi-frame face images according to the time sequence and the action instruction.
In practice, a face key point alignment algorithm may be adopted to determine the change of the key position points in each frame of face image, and to determine the matching degree between the change of the key position points and the action instruction, where the matching degree may represent how well the change of the key position points corresponds to each action in the action instruction. In practice, the matching degree may take a value between 0 and 1, and the higher the matching degree is, the more closely the key-point position changes correspond to the actions in the action instruction.
In practice, when there are a plurality of key position points, the matching degree of the change of each key position point in the face images of the plurality of frames and the motion command may be determined separately, and the average of the plurality of matching degrees may be determined as the final matching degree.
For example, suppose the action instruction sequentially comprises lowering the head, blinking, and shaking the head, and the key position points are the nose and the upper eyelid. The position changes of the nose and the upper eyelid in the multi-frame images can be determined respectively; if the position change of the nose matches the head-lowering and head-shaking actions in the action instruction with a matching degree of 0.8 in time sequence, and the position change of the upper eyelid matches the blinking action with a matching degree of 0.9, the average of the two matching degrees can be used as the final matching degree, which in this example is 0.85.
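The averaging in the example above can be sketched as follows. This is a minimal sketch: computing each per-key-point matching degree with an alignment algorithm is assumed to happen upstream, and the preset matching degree of 0.7 is an illustrative default.

```python
def final_matching_degree(per_keypoint_degrees):
    """Average the per-key-position-point matching degrees into one score."""
    return sum(per_keypoint_degrees) / len(per_keypoint_degrees)

def passes_liveness(per_keypoint_degrees, preset_degree=0.7):
    # Step S324': pass when the final matching degree is strictly greater
    # than the preset matching degree.
    return final_matching_degree(per_keypoint_degrees) > preset_degree
```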
And step S324', when the matching degree is greater than a preset matching degree, determining that the face image of the user to be registered passes the living body detection.
The preset matching degree can be set according to actual needs. When the matching degree is greater than the preset matching degree, it indicates that the face video was shot while a real person made the actions in sequence according to the action instruction, so it can be determined that the face video passes the living body detection.
Embodiment C: in combination with embodiment A, that is, in response to the registration application sent by the user to be registered, an action instruction is sent to the user to be registered, and a face image of the user to be registered responding to the action instruction is obtained. In order to improve the ability of the face recognition technology to resist screen-image attacks, while the action instruction is sent to the user to be registered, the terminal to which the user to be registered is currently logged in may also be controlled to flash the screen color.
The flashing of the screen color may refer to alternately changing the color of a window at a window for acquiring a face image on a display screen of the terminal, or alternately changing the color of a display interface on the display interface for acquiring the face image.
Correspondingly, a face image shot by the user to be registered in response to the action instruction and under the color of the flickering screen can be obtained.
In this embodiment, the face image of the user to be registered is a face image photographed, during registration, in response to the action instruction and while the screen color is flashed. Because the flashing screen color emits a light signal, and this light signal reaches the real face during shooting and is reflected by it, the finally obtained face image of the user to be registered carries the reflected light information produced by the flashing screen color.
Correspondingly, when the face image of the user to be registered is subjected to living body detection, whether the face image of the user to be registered is a living body image or not can be detected by performing reflection spectrum analysis of visible colored light on the face image of the user to be registered. And when the face image of the user to be registered is determined to be the living body image, determining that the face image of the user to be registered passes the living body detection.
The reflection spectrum analysis in the present invention may mean: obtaining a spectral image corresponding to the face image of the user to be registered, that is, converting the face image of the user to be registered into a spectral image; then extracting, for each waveband of the flashing screen color, the reflectivity curve characteristics reflected by real facial skin in that waveband; and carrying out spectral living body detection on the spectral image according to the reflectivity curve characteristics.
In practice, because the reflectivity of different object surfaces to the same light is different, the reflectivity curve characteristics are different, and it can be distinguished through the reflectivity curve characteristics whether the screen color light is reflected by a real human face or other objects, such as a photo, a mobile phone screen photo, and the like.
In practice, when the real skin is judged through the reflectivity curve characteristics extracted from the spectral image, the face image of the user to be registered can be determined to be the living body image. When the skin is judged not to be real skin through the reflectivity curve characteristics extracted from the spectral image, the face image of the user to be registered can be determined not to be a living body image, the face recognition of the user to be registered can be stopped, and meanwhile, alarm information can be generated to prompt the user to be attacked.
When the embodiment is adopted, the living body detection of the face image can be realized by flashing the screen color and performing reflection spectrum analysis on the face image so as to identify whether the face image of the user to be registered is the face image of the real face or not, thereby preventing attacks such as screen photos and the like.
Embodiment D: in this embodiment, in order to further improve the recognition sensitivity for the face image of the user to be registered and improve the ability of the face recognition technology to resist attacks such as photo cards, screen shooting, and human-skin masks, when performing living body detection on the face image of the user to be registered, living body detection and/or screen shooting detection can be performed on the face image to determine whether it is a living body image. When the face image of the user to be registered is determined to be a living body image, it is determined that the face image passes the living body detection.
In specific implementation, the face image of the user to be registered can be input into a trained living body face recognition model to determine whether it is a living body image. Specifically, the living body face recognition model can be obtained by training a convolutional neural network model with a large number of real face image samples, screen face image samples, and human-skin mask samples as training data. The specific training process can be carried out with reference to existing machine learning model technology and is not repeated herein.
When the face image of the user to be registered is input into the living body face recognition model, whether the face image of the user to be registered is the face image of the real face is recognized through the living body face recognition model.
When the technical scheme is adopted, the living body face recognition model can be utilized to carry out the living body detection on the face image of the user to be registered, so that the living body detection efficiency and the recognition precision can be improved.
Embodiment E: in this embodiment, in order to further improve the recognition sensitivity for the face image of the user to be registered and improve the ability of the face recognition technology to resist attacks from image synthesis software, that is, so that the face recognition technology of the present invention can recognize a tampered face image, image tampering detection can be performed on the face image of the user to be registered during living body detection, so as to determine whether the face image of the user to be registered is a tampered image. When it is determined that the face image of the user to be registered is not a tampered image, it is determined that the face image passes the living body detection.
In specific implementation, the face image of the user to be registered can be input into the trained image tampering model, so that whether the image is tampered or not can be identified through the image tampering model. Specifically, the image tampering model may be a model obtained by inputting a large number of original face image samples and synthesized face image samples into a neural network model and training the neural network model. The specific training process can be performed by referring to the existing machine learning model technology, and details are not repeated herein.
The image tampering model is obtained by training an original face image sample and a synthesized face image sample, so that the image tampering model can identify various synthesized face images, and can determine whether the face image of the user to be registered is a tampered image. If the image is not a tampered image, it can indicate that the obtained image is an image of a real face of the user to be registered.
When the technical scheme is adopted, the image tampering model can be utilized to carry out the living body detection on the face image of the user to be registered, so that the living body detection efficiency and the identification precision can be improved.
In this embodiment, after the face image of the user to be registered is obtained, living body detection is performed on it; after the living body detection is passed, the face image is added to the target face image cluster, the growth rate of face images in the target face image cluster is monitored, and when the growth rate is greater than a preset growth rate, prompt information is generated to prompt processing of the face images added to the target face image cluster within a preset time period. Because the face image of the user to be registered undergoes living body detection and is added to the target face image cluster only when the detection passes, attacks on the face image cluster by non-real faces are avoided, the face images in the face image cluster all come from real faces, and the safety of face recognition is improved.
When the growth rate of face images in the target face image cluster is greater than the preset growth rate, prompt information is generated to prompt processing of the face images added to the target face image cluster within a preset time period. This improves the sensitivity of detecting whether the face images in the target face cluster are under attack, so that attacks on face images can be identified in time, and the reduction in face recognition safety caused by attacks is avoided.
Referring to fig. 5, a schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention is shown, and as shown in fig. 5, the apparatus is applied to a mobile terminal or a server, and may specifically include the following modules:
a face image obtaining module 51, configured to obtain a face image of a user to be registered;
a face image adding module 52, configured to add a face image of the user to be registered to a target face image cluster, where a similarity between face images belonging to the target face image cluster is greater than a preset similarity threshold;
and a prompt information generating module 53, configured to generate prompt information when the growth rate of the face image in the target face image cluster is greater than a preset growth rate, where the prompt information is used to prompt processing of the face image added to the target face image cluster within a preset time period.
Optionally, the facial image adding module 52 includes:
the first adding module is used for adding the face image of the user to be registered to a target face image cluster to which the target registered face image belongs when the similarity between the face image of the user to be registered and the target registered face image in each registered face image is determined to be larger than the preset similarity threshold;
and the second adding module is used for creating a target face image cluster corresponding to the user to be registered and adding the face image to be registered into the target face image cluster when the similarity between the face image of the user to be registered and each registered face image is determined not to be greater than the preset similarity threshold.
Optionally, the apparatus may further specifically include the following modules:
the living body detection module can be used for carrying out living body detection on the face image of the user to be registered;
the facial image adding module is specifically used for adding the facial image of the user to be registered into the target facial image cluster when the facial image of the user to be registered passes through the live body detection.
Accordingly, in the case of including the living body detection module, optionally, the apparatus may further specifically include the following modules:
the action instruction sending module can be used for responding to the registration application sent by the user to be registered and sending an action instruction to the user to be registered;
the face image obtaining module is specifically used for obtaining a face image of the user to be registered responding to the action instruction;
the living body detection module may be specifically configured to determine whether the face image of the user to be registered matches the action instruction; and when the facial image of the user to be registered is determined to be matched with the action instruction, determining that the facial image of the user to be registered passes the living body detection.
Optionally, the living body detection module may specifically include the following units:
the screen shooting detection unit may be used to perform living body detection and/or screen shooting detection on the face image of the user to be registered, so as to determine whether the face image of the user to be registered is a living body image;
the first determining unit may be configured to determine that the face image of the user to be registered passes the living body detection when the face image of the user to be registered is determined to be a living body image.
Optionally, the living body detection module may specifically include the following units:
the image tampering detection unit may be configured to perform image tampering detection on the facial image of the user to be registered, so as to determine whether the facial image of the user to be registered is a tampered image;
a second determination unit, configured to determine that the face image of the user to be registered passes the live body detection when it is determined that the face image of the user to be registered is not a tampered image.
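One family of tamper-detection techniques consistent with the unit above is copy-move forgery detection. The toy sketch below hashes small pixel blocks of a grayscale image and flags exact duplicates; real detectors use far more robust block features, and every name and parameter here is hypothetical.

```python
# Toy copy-move tamper hint: identical non-overlapping pixel blocks suggest
# a region of the image was duplicated (one simple form of tampering).

def find_duplicate_blocks(img, block=2):
    """img: 2-D list of grayscale values; returns True if any two
    non-overlapping block x block regions are pixel-identical."""
    h, w = len(img), len(img[0])
    seen = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = tuple(img[y + dy][x + dx]
                        for dy in range(block) for dx in range(block))
            if key in seen:
                return True       # duplicated block found: tampering hint
            seen[key] = (y, x)
    return False
```

Exact matching is fragile under recompression; practical systems compare quantized DCT or feature descriptors of the blocks instead.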
Optionally, the facial image obtaining module 51 may be specifically configured to obtain a facial video of the user to be registered responding to the action instruction; accordingly, the in-vivo detection module may specifically include the following units:
the face image extraction unit can be used for extracting multiple frames of face images from the face video in sequence according to the time sequence, wherein each frame of face image in the multiple frames of face images comprises a key position point;
the matching degree determining unit may be configured to determine a matching degree between the position change of the key position point in the multiple frames of face images according to the time sequence and the action instruction;
the third determining unit may be configured to determine that the face image of the user to be registered passes the living body detection when the matching degree is greater than a preset matching degree.
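The frame-extraction and matching-degree units above can be sketched as follows. The per-frame key-point measurement, the `"open_mouth"` instruction name, and the 0.5 matching threshold are illustrative assumptions, not values from the disclosure.

```python
# Sketch: track a key position point across frames extracted in time order
# (e.g. mouth-opening distance) and score how well its trajectory matches
# the issued action instruction.

MATCH_THRESHOLD = 0.5  # assumed preset matching degree

def action_match_degree(keypoint_values, instruction):
    """keypoint_values: per-frame measurement of the key position point."""
    if len(keypoint_values) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(keypoint_values, keypoint_values[1:])]
    if instruction == "open_mouth":
        # fraction of frame transitions moving in the instructed direction
        positive = sum(1 for d in deltas if d > 0)
        return positive / len(deltas)
    return 0.0

def passes_liveness(keypoint_values, instruction):
    return action_match_degree(keypoint_values, instruction) > MATCH_THRESHOLD
```

A replayed static photo yields a flat trajectory and a low matching degree, which is what makes the position-change check a liveness signal.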
Optionally, the action instruction sending module may specifically include the following units:
the color light control unit can be used for controlling the terminal which is currently logged in by the user to be registered to flash the screen color;
the face image obtaining module 51 may be specifically configured to obtain a face image that is shot by the user to be registered in response to the action instruction and in the color of the flashing screen;
accordingly, the in-vivo detection module may specifically include the following units:
the reflection spectrum analysis unit can be used for performing reflection spectrum analysis of visible colored light on the face image of the user to be registered so as to detect whether the face image of the user to be registered is a living body image;
the fourth determining unit may be configured to determine that the face image of the user to be registered passes the living body detection when the face image of the user to be registered is determined to be a living body image.
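A crude version of the screen-flash reflection check can be sketched as below: when the terminal flashes a screen color, a real face close to the screen should reflect it, shifting the dominant color channel of the captured frame. The channel-mean heuristic and the RGB encoding are assumptions; a real reflection spectrum analysis is considerably more involved.

```python
# Minimal screen-flash reflection hint: the dominant channel of the captured
# face frame should match the color the screen was instructed to flash.

def dominant_channel(pixels):
    """pixels: list of (r, g, b) tuples; returns the index of the channel
    with the highest mean intensity (0=R, 1=G, 2=B)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    return means.index(max(means))

def reflects_flash(pixels, flash_color):
    """flash_color: 0, 1 or 2 for the flashed screen color channel."""
    return dominant_channel(pixels) == flash_color
```

A photo of a screen or a printed picture tends not to reproduce the expected color shift, which is the basis of this liveness cue.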
Optionally, the apparatus may further include the following modules:
the first response module is used for responding to the prompt information and carrying out living body detection on the face images added into the target face image cluster within a preset time period;
the second response module is used for responding to the prompt information and sending the face images added to the target face image cluster in the preset time period to a preset terminal;
and the third response module is used for responding to the prompt information and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face cluster, so as to prompt that the user is under attack.
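The growth-speed trigger that drives these response modules can be sketched as a sliding-window count. The 60-second window and the rate of 5 images per window are hypothetical values standing in for the "preset time period" and "preset growth speed" of the disclosure.

```python
# Sketch: count how many face images joined a cluster within the preset
# time period and raise the prompt when the growth speed is exceeded.

WINDOW = 60.0    # assumed preset time period, in seconds
MAX_GROWTH = 5   # assumed preset growth speed: images per window

def should_prompt(add_timestamps, now):
    """add_timestamps: times at which face images joined the cluster."""
    recent = [t for t in add_timestamps if now - t <= WINDOW]
    return len(recent) > MAX_GROWTH
```

An abnormally fast-growing cluster suggests many registrations with the same face, the attack pattern the prompt information is meant to surface.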
Since the embodiment of the face image processing apparatus is substantially similar to the embodiment of the face image processing method, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the description of the face image processing method embodiment.
Referring to fig. 6, a schematic structural diagram of a server 600 according to an embodiment of the present invention is shown. The server 600 may include the face image processing apparatus 61 and the database 62, and may further include a data interface 63 and a network interface 64. The database 62 may store face image clusters of a plurality of registered users, each face image cluster including a plurality of face images of the same registered user, and the face image processing apparatus 61 may be configured to execute the face image processing method. Specifically, the face image processing apparatus may be a device combining software and hardware: the hardware may include physical keys used for providing functions such as returning and confirming, and the software includes an application program; through this combination of software and hardware, the face image processing apparatus may cooperate with the database 62 to implement the face image processing method described in the above embodiments.
Referring to fig. 7, a schematic structural diagram of a terminal device 700 according to an embodiment of the present invention is shown, and as shown in fig. 7, the terminal device 700 may include: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission and reception process or a call process. Specifically, it receives downlink data from a base station and then forwards the received downlink data to the processor 710 for processing; in addition, it transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the terminal device 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sounds and process them into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 701 for output.
The terminal device 700 further comprises at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the luminance of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 7061 and/or a backlight when the terminal device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration) and for vibration identification related functions (such as pedometer, tapping); the sensors 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 710, receives a command from the processor 710, and executes the command. In addition, the touch panel 7071 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the terminal apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 700 or may be used to transmit data between the terminal apparatus 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 710 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the whole electronic device. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The terminal device 700 may further include a power supply 711 (e.g., a battery) for supplying power to various components, and preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 700 includes some functional modules that are not shown, and are not described in detail herein.
The embodiment of the present invention further provides a mobile terminal, which may be a smart phone, a notebook computer, a tablet computer or other portable intelligent devices, and the mobile terminal may include a face image processing device, where the face image processing device is configured to execute each process of the face image processing method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
An embodiment of the present invention further provides an electronic device, including: the processor, the memory, and the computer program stored in the memory and capable of running on the processor, when executed by the processor, implement the processes of the above-mentioned embodiment of the face image processing method, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program realizes each process of the embodiment of the face image processing method, and can achieve the same technical effect, and in order to avoid repetition, the description is not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (22)

1. A face image processing method is characterized by comprising the following steps:
acquiring a face image of a user to be registered;
adding the face images of the users to be registered into a target face image cluster, wherein the similarity between the face images belonging to the target face image cluster is greater than a preset similarity threshold;
and when the growth speed of the face image in the target face image cluster is greater than a preset growth speed, generating prompt information, wherein the prompt information is used for prompting the processing of the face image added into the target face image cluster within a preset time period.
2. The method of claim 1, wherein adding the facial image of the user to be registered to a target facial image cluster comprises:
when the similarity between the face image of the user to be registered and the target registered face image in each registered face image is determined to be larger than the preset similarity threshold value, adding the face image of the user to be registered to a target face image cluster to which the target registered face image belongs;
when the similarity between the face image of the user to be registered and each registered face image is determined not to be larger than the preset similarity threshold, creating a target face image cluster corresponding to the user to be registered, and adding the face image to be registered to the created target face image cluster.
3. The method according to claim 1, wherein after obtaining the face image of the user to be registered, the method further comprises:
performing living body detection on the face image of the user to be registered;
adding the face image of the user to be registered into a target face image cluster, wherein the step of adding the face image of the user to be registered into the target face image cluster comprises the following steps:
and when the facial image of the user to be registered passes the living body detection, adding the facial image of the user to be registered into the target facial image cluster.
4. The method according to claim 3, wherein before obtaining the face image of the user to be registered, the method further comprises:
responding to a registration application sent by the user to be registered, and sending an action instruction to the user to be registered;
the method for acquiring the face image of the user to be registered comprises the following steps:
acquiring a face image of the user to be registered responding to the action instruction;
the living body detection is carried out on the face image of the user to be registered, and the living body detection comprises the following steps:
determining whether the facial image of the user to be registered is matched with the action instruction;
and when the facial image of the user to be registered is determined to be matched with the action instruction, determining that the facial image of the user to be registered passes the living body detection.
5. The method according to claim 3, wherein the live body detection of the face image of the user to be registered comprises:
performing living body and/or screen shooting detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a living body image;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes the living body detection.
6. The method according to claim 3, wherein the live body detection of the face image of the user to be registered comprises:
performing image tampering detection on the facial image of the user to be registered to determine whether the facial image of the user to be registered is a tampered image;
and when the facial image of the user to be registered is determined not to be a tampered image, determining that the facial image of the user to be registered passes the living body detection.
7. The method according to claim 4, wherein, while obtaining the facial image of the user to be registered in response to the action instruction, the method further comprises:
acquiring a face video of the user to be registered responding to the action instruction;
the living body detection is carried out on the face image of the user to be registered, and the living body detection comprises the following steps:
extracting multiple frames of face images from the face video in sequence according to the time sequence, wherein each frame of face image in the multiple frames of face images comprises a key position point;
determining the matching degree of the position change of the key position points in the multi-frame face images according to the time sequence and the action instruction;
and when the matching degree is greater than the preset matching degree, determining that the face image of the user to be registered passes the living body detection.
8. The method according to claim 4, wherein, while sending an action instruction to the user to be registered in response to the registration application sent by the user to be registered, the method further comprises:
controlling the terminal currently logged in by the user to be registered to flash the screen color;
obtaining the facial image of the user to be registered responding to the action instruction, comprising:
acquiring a face image which is shot by the user to be registered under the color of the flickering screen in response to the action instruction;
the living body detection is carried out on the face image of the user to be registered, and the living body detection comprises the following steps:
performing visible color light reflection spectrum analysis on the face image of the user to be registered to detect whether the face image of the user to be registered is a living body image;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes the living body detection.
9. The method of any one of claims 1-8, further comprising at least one of:
responding to the prompt information, and performing living body detection on the face image added to the target face image cluster within a preset time period;
responding to the prompt information, and sending the face image added to the target face image cluster in the preset time period to a preset terminal;
responding to the prompt information, and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face cluster, so as to prompt that the user is under attack.
10. A face image processing apparatus, characterized in that the apparatus comprises:
the face image acquisition module is used for acquiring a face image of a user to be registered;
the face image adding module is used for adding the face image of the user to be registered into a target face image cluster, wherein the similarity between the face images belonging to the target face image cluster is greater than a preset similarity threshold;
and the prompt information generation module is used for generating prompt information when the growth speed of the face images in the target face image cluster is greater than a preset growth speed, wherein the prompt information is used for prompting the processing of the face images added into the target face image cluster within a preset time period.
11. The apparatus of claim 10, wherein the face image adding module comprises:
the first adding module is used for adding the face image of the user to be registered to a target face image cluster to which the target registered face image belongs when the similarity between the face image of the user to be registered and the target registered face image in each registered face image is determined to be larger than the preset similarity threshold;
and the second adding module is used for creating a target face image cluster corresponding to the user to be registered and adding the face image to be registered into the target face image cluster when the similarity between the face image of the user to be registered and each registered face image is determined not to be greater than the preset similarity threshold.
12. The apparatus of claim 10, further comprising:
the living body detection module is used for carrying out living body detection on the face image of the user to be registered;
the facial image adding module is specifically used for adding the facial image of the user to be registered into the target facial image cluster when the facial image of the user to be registered passes the living body detection.
13. The apparatus of claim 12, further comprising:
the action instruction sending module is used for responding to the registration application sent by the user to be registered and sending an action instruction to the user to be registered;
the face image obtaining module is specifically used for obtaining a face image of the user to be registered responding to the action instruction;
the living body detection module is specifically used for determining whether the face image of the user to be registered is matched with the action instruction; and when the facial image of the user to be registered is determined to be matched with the action instruction, determining that the facial image of the user to be registered passes the living body detection.
14. The apparatus of claim 12, wherein the liveness detection module comprises:
the screen shooting detection unit is used for carrying out living body and/or screen shooting detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a living body image;
and the first determining unit is used for determining that the face image of the user to be registered passes the living body detection when the face image of the user to be registered is determined to be the living body image.
15. The apparatus of claim 12, wherein the liveness detection module comprises:
the image tampering detection unit is used for carrying out image tampering detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a tampered image;
a second determination unit, configured to determine that the face image of the user to be registered passes the live body detection when it is determined that the face image of the user to be registered is not a tampered image.
16. The apparatus according to claim 13, wherein the face image obtaining module is specifically configured to obtain a face video of the user to be registered in response to the action instruction;
the living body detection module comprises:
the face image extraction unit, configured to extract multiple frames of face images from the face video in chronological order, wherein each frame of the multiple frames of face images comprises key position points;
the matching degree determining unit, configured to determine a matching degree between the chronological position changes of the key position points across the multiple frames of face images and the action instruction;
the third determining unit, configured to determine that the face image of the user to be registered passes the living body detection when the matching degree is greater than a preset matching degree.
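As an illustrative sketch of this claim-16 flow (the landmark names, the modelled action, and the scoring rule are assumptions, not taken from the patent):

```python
def mouth_gap(landmarks: dict) -> float:
    """Vertical gap between the assumed upper/lower lip key position points."""
    return landmarks["lower_lip_y"] - landmarks["upper_lip_y"]

def matching_degree(frames: list, instruction: str) -> float:
    """Score how well the chronological position changes of the key
    position points agree with the instructed action. Only 'open_mouth'
    is modelled here: the lip gap should grow frame over frame."""
    if instruction != "open_mouth" or len(frames) < 2:
        return 0.0
    gaps = [mouth_gap(f) for f in frames]  # frames are in time order
    agree = sum(1 for a, b in zip(gaps, gaps[1:]) if b > a)
    return agree / (len(gaps) - 1)

def passes_liveness(frames: list, instruction: str,
                    preset_matching_degree: float = 0.7) -> bool:
    """Pass only when the matching degree exceeds the preset value."""
    return matching_degree(frames, instruction) > preset_matching_degree
```

A static photograph yields no frame-to-frame landmark motion, so its matching degree stays at zero regardless of the instruction.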
17. The apparatus of claim 13, wherein the action instruction sending module comprises:
the color light control unit, configured to control the terminal on which the user to be registered is currently logged in to flash screen colors;
the face image obtaining module is specifically configured to obtain a face image of the user to be registered shot under the flashing screen colors in response to the action instruction;
the living body detection module comprises:
the reflection spectrum analysis unit, configured to perform reflection spectrum analysis of visible colored light on the face image of the user to be registered, so as to detect whether the face image of the user to be registered is a living body image;
the fourth determining unit, configured to determine that the face image of the user to be registered passes the living body detection when the face image of the user to be registered is determined to be a living body image.
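A minimal sketch of the screen-flash reflection idea, assuming per-frame face crops and a known flash sequence (the scoring is illustrative, not the claimed analysis):

```python
import numpy as np

def flash_match_degree(flashed_channels, face_patches) -> float:
    """flashed_channels: per-frame index of the colour the screen flashed
    (0=R, 1=G, 2=B). face_patches: per-frame HxWx3 crops of the face.
    A live face reflects the flashed colour, so the dominant channel of
    each crop should track the flash sequence."""
    hits = 0
    for channel, patch in zip(flashed_channels, face_patches):
        means = patch.reshape(-1, 3).mean(axis=0)
        hits += int(np.argmax(means) == channel)
    return hits / len(flashed_channels)

def passes_reflection_check(flashed_channels, face_patches,
                            threshold: float = 0.8) -> bool:
    """Pass when the captured faces track the flashed colours closely."""
    return flash_match_degree(flashed_channels, face_patches) >= threshold
```

A replayed recording cannot follow a flash sequence chosen at verification time, so its match degree stays low.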
18. The apparatus of claim 10, further comprising:
the first response module, configured to perform, in response to the prompt information, living body detection on the face images added to the target face image cluster within a preset time period;
the second response module, configured to send, in response to the prompt information, the face images added to the target face image cluster within the preset time period to a preset terminal;
the third response module, configured to output alarm information in response to the prompt information, wherein the alarm information comprises user information corresponding to the target face image cluster, so as to prompt that the user is under attack.
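The three response modules act on face images added to the target face image cluster within a preset time period; the windowed selection they share, and the alarm output, can be sketched as follows (the window length and message format are hypothetical):

```python
from datetime import datetime, timedelta

def additions_in_window(add_times, now, window=timedelta(hours=24)):
    """Face images added to the target face image cluster within the
    preset time period ending at `now` (timestamps per added image)."""
    return [t for t in add_times if timedelta(0) <= now - t <= window]

def alarm_info(user_info: str, count: int) -> str:
    """Alarm output of the third response module: names the user whose
    cluster is growing abnormally fast."""
    return f"user {user_info} may be under attack: {count} new face images in window"
```

The selected images would then be fed to the living body detection of the first response module or forwarded to the preset terminal by the second.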
19. A server, characterized by comprising a face image processing device and a database, wherein the database stores a plurality of face image clusters, each face image cluster comprises a plurality of face images of the same user, and the face image processing device is configured to execute the face image processing method according to any one of claims 1 to 9.
20. A mobile terminal, characterized by comprising a face image processing device configured to execute the face image processing method according to any one of claims 1 to 9.
21. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the face image processing method according to any one of claims 1 to 9.
22. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the face image processing method according to any one of claims 1 to 9.
CN202010398944.4A 2020-05-12 2020-05-12 Face image processing method, device, server, terminal, equipment and medium Active CN111723655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010398944.4A CN111723655B (en) 2020-05-12 2020-05-12 Face image processing method, device, server, terminal, equipment and medium


Publications (2)

Publication Number Publication Date
CN111723655A true CN111723655A (en) 2020-09-29
CN111723655B CN111723655B (en) 2024-03-08

Family

ID=72564390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010398944.4A Active CN111723655B (en) 2020-05-12 2020-05-12 Face image processing method, device, server, terminal, equipment and medium

Country Status (1)

Country Link
CN (1) CN111723655B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8620315B1 (en) * 2006-09-29 2013-12-31 Yahoo! Inc. Multi-tiered anti-abuse registration for a mobile device user
CN105808988A (en) * 2014-12-31 2016-07-27 阿里巴巴集团控股有限公司 Method and device for identifying exceptional account
CN105488495A (en) * 2016-01-05 2016-04-13 上海川织金融信息服务有限公司 Identity identification method and system based on combination of face characteristics and device fingerprint
CN106339615A (en) * 2016-08-29 2017-01-18 北京红马传媒文化发展有限公司 Abnormal registration behavior recognition method, system and equipment
CN106657007A (en) * 2016-11-18 2017-05-10 北京红马传媒文化发展有限公司 Method for recognizing abnormal batch ticket booking behavior based on DBSCAN model
CN108629260A (en) * 2017-03-17 2018-10-09 北京旷视科技有限公司 Live body verification method and device and storage medium
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device
WO2018192406A1 (en) * 2017-04-20 2018-10-25 腾讯科技(深圳)有限公司 Identity authentication method and apparatus, and storage medium
CN108229120A (en) * 2017-09-07 2018-06-29 北京市商汤科技开发有限公司 Face unlock and its information registering method and device, equipment, program, medium
CN107835154A (en) * 2017-10-09 2018-03-23 武汉斗鱼网络科技有限公司 A kind of batch registration account recognition methods and system
CN108446387A (en) * 2018-03-22 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for updating face registration library
CN108491813A (en) * 2018-03-29 2018-09-04 百度在线网络技术(北京)有限公司 Method and apparatus for fresh information

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JUAN DU ET AL.: "Method for detecting abnormal behaviour of users based on selective clustering ensemble", IET NETWORKS, 1 March 2018 (2018-03-01), pages 117 - 118 *
LIU Liping; HUANG Xiaona; YANG Shan; PAN Jiahui: "Multi-dimensional consumer group analysis and product recommendation system", Computer Systems & Applications, no. 03, 15 March 2020 (2020-03-15) *
SUN Lin; PAN Gang: "Real-time detection of video replay spoofing attacks in face recognition", Journal of Circuits and Systems, no. 02, 15 April 2010 (2010-04-15) *
NING Haibin: "Research on development trends of network security technology based on big data security analysis", Network Information Security, no. 290, 30 June 2016 (2016-06-30) *
CHEN Zhenguo: "Research on trust models and their applications in the Internet of Things environment", Beijing: Beijing Jiaotong University Press / Tsinghua University Press, pages 117-118 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723243A (en) * 2021-08-20 2021-11-30 南京华图信息技术有限公司 Thermal infrared image face recognition method for wearing mask and application
CN113723243B (en) * 2021-08-20 2024-05-17 南京华图信息技术有限公司 Face recognition method of thermal infrared image of wearing mask and application

Also Published As

Publication number Publication date
CN111723655B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN109381165B (en) Skin detection method and mobile terminal
WO2017181769A1 (en) Facial recognition method, apparatus and system, device, and storage medium
CN108712603B (en) Image processing method and mobile terminal
CN109492550B (en) Living body detection method, living body detection device and related system applying living body detection method
CN108875468B (en) Living body detection method, living body detection system, and storage medium
CN108345442B (en) A kind of operation recognition methods and mobile terminal
CN109525837B (en) Image generation method and mobile terminal
CN108012026B (en) Eyesight protection method and mobile terminal
CN110807405A (en) Detection method of candid camera device and electronic equipment
CN108549802A (en) A kind of unlocking method, device and mobile terminal based on recognition of face
CN108206892B (en) Method and device for protecting privacy of contact person, mobile terminal and storage medium
CN108366220A (en) A kind of video calling processing method and mobile terminal
CN108462826A (en) A kind of method and mobile terminal of auxiliary photo-taking
CN109544172B (en) Display method and terminal equipment
CN108038360B (en) Operation mode switching method and mobile terminal
CN108629280A (en) Face identification method and mobile terminal
CN109793491B (en) Terminal equipment for color blindness detection
CN107895108B (en) Operation management method and mobile terminal
CN108446665B (en) Face recognition method and mobile terminal
CN111723655B (en) Face image processing method, device, server, terminal, equipment and medium
CN107809515B (en) Display control method and mobile terminal
CN109639981A (en) A kind of image capturing method and mobile terminal
CN112818733B (en) Information processing method, device, storage medium and terminal
CN108960097B (en) Method and device for obtaining face depth information
CN110889692A (en) Mobile payment method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant