CN111723655B - Face image processing method, device, server, terminal, equipment and medium - Google Patents

Face image processing method, device, server, terminal, equipment and medium

Info

Publication number
CN111723655B
CN111723655B (application CN202010398944.4A)
Authority
CN
China
Prior art keywords
face image
registered
user
face
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010398944.4A
Other languages
Chinese (zh)
Other versions
CN111723655A (en)
Inventor
张学军
史忠伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuba Co Ltd
Original Assignee
Wuba Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuba Co Ltd
Priority to CN202010398944.4A
Publication of CN111723655A
Application granted
Publication of CN111723655B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55: Detecting local intrusion or implementing counter-measures
    • G06F21/554: Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/64: Protecting data integrity, e.g. using checksums, certificates or signatures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Abstract

The invention provides a face image processing method, apparatus, server, terminal, device and medium. The method comprises: obtaining a face image of a user to be registered; adding the face image of the user to be registered to a target face image cluster, wherein the similarity between face images belonging to the target face image cluster is greater than a preset similarity threshold; and generating prompt information when the growth speed of face images in the target face image cluster is greater than a preset growth speed, the prompt information being used to prompt processing of the face images added to the target face image cluster within a preset time period. The invention can improve the sensitivity of recognizing face attacks, thereby improving the security of face recognition.

Description

Face image processing method, device, server, terminal, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a face image processing method, apparatus, server, terminal, device, and medium.
Background
Currently, face images are used more and more widely in daily life and work; for example, face recognition based on face images is used to ensure the security and accuracy of mobile payment, security protection and information comparison.
As face recognition based on face images is applied to more and more fields, a large number of face images are generated. Although this increases the number of face images available for comparison in face recognition, it also brings a series of problems. For example, such systems are highly vulnerable, and current face recognition technologies are weak against spoofing attacks such as face masks. Moreover, the sensitivity with which the system recognizes attacks is low: when an attack using forged registration face images occurs, it is difficult to identify the situation in time.
Therefore, how to improve the resistance of face recognition technology to spoofing attacks has become an urgent problem to be solved.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a face image processing method, apparatus, server, terminal, device, and medium, aiming to solve the technical problem that current face recognition technology in the related art is weak at resisting spoofing attacks such as fake faces.
In order to solve the technical problems, the invention adopts the following scheme:
in a first aspect, an embodiment of the present invention provides a face image processing method, where the method includes:
Obtaining a face image of a user to be registered;
adding the face images of the user to be registered into a target face image cluster, wherein the similarity among face images belonging to the target face image cluster is larger than a preset similarity threshold;
and generating prompt information when the growth speed of the face images in the target face image cluster is greater than a preset growth speed, wherein the prompt information is used for prompting the processing of the face images added into the target face image cluster in a preset time period.
Optionally, adding the face image of the user to be registered to a target face image cluster includes:
when the similarity between the face image of the user to be registered and the target registered face image in all the registered face images is determined to be larger than the preset similarity threshold, adding the face image of the user to be registered into a target face image cluster to which the target registered face image belongs;
when the similarity between the face image of the user to be registered and each registered face image is not larger than the preset similarity threshold value, creating a target face image cluster corresponding to the user to be registered, and adding the face image to be registered into the target face image cluster.
Optionally, after obtaining the face image of the user to be registered, the method further comprises:
performing living body detection on the face image of the user to be registered;
adding the face image of the user to be registered to a target face image cluster comprises the following steps:
and when the face image of the user to be registered passes through living detection, adding the face image of the user to be registered into a target face image cluster.
Optionally, before obtaining the face image of the user to be registered, the method further includes:
responding to a registration application sent by the user to be registered, and sending an action instruction to the user to be registered;
obtaining a face image of a user to be registered, including:
obtaining a face image of the user to be registered in response to the action instruction;
performing living body detection on the face image of the user to be registered, including:
determining whether the face image of the user to be registered is matched with the action instruction;
and when the face image of the user to be registered is matched with the action instruction, determining that the face image of the user to be registered passes through the living body detection.
Optionally, performing living body detection on the face image of the user to be registered includes:
Performing living body and/or screen capturing detection on the face image of the user to be registered to determine whether the face image of the user to be registered is a living body image or not;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes through the living body detection.
Optionally, performing living body detection on the face image of the user to be registered includes:
performing image tampering detection on the face image of the user to be registered to determine whether the face image of the user to be registered is a tampered image;
and when the face image of the user to be registered is not a tampered image, determining that the face image of the user to be registered passes through the living body detection.
Optionally, when obtaining the face image of the user to be registered in response to the action instruction, the method further includes:
acquiring a face video of the user to be registered in response to the action instruction;
performing living body detection on the face image of the user to be registered, including:
sequentially extracting a plurality of frames of face images from the face video according to time sequence, wherein each frame of face image in the plurality of frames of face images comprises key position points;
Determining the matching degree of the position change of the key position points in the multi-frame face image according to the time sequence and the action instruction;
and when the matching degree is larger than a preset matching degree, determining that the face image of the user to be registered passes through the living body detection.
Optionally, in response to the registration application sent by the user to be registered, sending an action instruction to the user to be registered, and meanwhile, further including:
controlling the flashing screen color of the terminal which is currently logged in by the user to be registered;
obtaining a face image of the user to be registered in response to the action instruction, including:
obtaining a face image shot by the user to be registered in response to the action instruction under the flashing screen color;
performing living body detection on the face image of the user to be registered, including:
carrying out reflection spectrum analysis of visible color light on the face image of the user to be registered so as to detect whether the face image of the user to be registered is a living body image or not;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes through the living body detection.
Optionally, the method further comprises at least one of:
Responding to the prompt information, and performing living detection on face images added to the target face image cluster in a preset time period;
responding to the prompt information, and sending the face images added to the target face image cluster in the preset time period to a preset terminal;
and responding to the prompt information, and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face cluster so as to prompt the user to be attacked.
In a second aspect, an embodiment of the present invention provides a face image processing apparatus, including:
the face image acquisition module is used for acquiring a face image of a user to be registered;
the face image adding module is used for adding the face images of the user to be registered into a target face image cluster, wherein the similarity among the face images belonging to the target face image cluster is larger than a preset similarity threshold;
the prompt information generation module is used for generating prompt information when the growth speed of the face images in the target face image cluster is greater than the preset growth speed, and the prompt information is used for prompting the processing of the face images added into the target face image cluster in a preset time period.
Optionally, the face image adding module includes:
the first adding module is used for adding the face image of the user to be registered into a target face image cluster to which the target registered face image belongs when the similarity between the face image of the user to be registered and the target registered face image in each registered face image cluster is determined to be larger than the preset similarity threshold;
and the second adding module is used for creating a target face image cluster corresponding to the user to be registered and adding the face image to be registered into the target face image cluster when the similarity between the face image of the user to be registered and each registered face image is not larger than the preset similarity threshold value.
Optionally, the apparatus further comprises:
the living body detection module is used for carrying out living body detection on the face image of the user to be registered;
the face image adding module is specifically configured to add the face image of the user to be registered to a target face image cluster when the face image of the user to be registered passes through living detection.
Optionally, the apparatus further comprises:
the action instruction sending module is used for responding to a registration application sent by the user to be registered and sending an action instruction to the user to be registered;
The face image obtaining module is specifically configured to obtain a face image of the user to be registered in response to the action instruction;
the living body detection module is specifically configured to determine whether the face image of the user to be registered is matched with the action instruction; and when the face image of the user to be registered is determined to be matched with the action instruction, determining that the face image of the user to be registered passes through the living body detection.
Optionally, the living body detection module includes:
the screen shooting detection unit is used for performing living body and/or screen shooting detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a living body image or not;
and the first determining unit is used for determining that the face image of the user to be registered passes through the living body detection when the face image of the user to be registered is determined to be the living body image.
Optionally, the living body detection module includes:
the image tampering detection unit is used for performing image tampering detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a tampered image or not;
and a second determining unit configured to determine that the face image of the user to be registered passes through the living body detection when it is determined that the face image of the user to be registered is not a tampered image.
Optionally, the face image obtaining module is specifically configured to obtain a face video of the user to be registered in response to the action instruction;
the living body detection module includes:
the face image extraction unit is used for sequentially extracting a plurality of frames of face images from the face video according to time sequence, wherein each frame of face image in the plurality of frames of face images comprises key position points;
the matching degree determining unit is used for determining the matching degree of the position change of the key position point in the multi-frame face image according to the time sequence and the action instruction;
and a third determining unit, configured to determine that the face image of the user to be registered passes through the living body detection when the matching degree is greater than a preset matching degree.
Optionally, the action instruction sending module includes:
the color light control unit is used for controlling the flashing screen color of the terminal which is currently logged in by the user to be registered;
the face image obtaining module is specifically configured to obtain a face image that is shot by the user to be registered in response to the action instruction and under the color of the flickering screen;
the living body detection includes:
the reflection spectrum analysis unit is used for carrying out reflection spectrum analysis of visible color light on the face image of the user to be registered so as to detect whether the face image of the user to be registered is a living body image or not;
And a fourth determining unit, configured to determine that the face image of the user to be registered is detected by the living body when determining that the face image of the user to be registered is a living body image.
Optionally, the apparatus further comprises:
the first response module is used for responding to the prompt information and performing living detection on the face images added to the target face image cluster in a preset time period;
the second response module is used for responding to the prompt information and sending the face images added to the target face image cluster in the preset time period to a preset terminal;
and the third response module is used for responding to the prompt information and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face cluster so as to prompt the user to be attacked.
In a third aspect, the present invention provides a server, including a face image processing apparatus and a database, where the database stores a plurality of face image clusters, each face image cluster includes a plurality of face images of a same user, and the face image processing apparatus is configured to execute the face image processing method of the first aspect.
In a fourth aspect, the present invention provides a mobile terminal comprising a face image processing apparatus for performing the face image processing method according to the first aspect.
In a fifth aspect, the present invention provides an electronic device, comprising: the facial image processing system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the computer program realizes the facial image processing method of the first aspect when the computer program is executed by the processor.
In a sixth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the face image processing method of the first aspect.
Compared with the prior art, the invention has at least the following advantages:
in the embodiment of the invention, when the face image of the user to be registered is acquired, the face image of the user to be registered can be added to the target face image cluster, and when the growth speed of face images in the target face image cluster reaches the preset growth speed, prompt information is generated to prompt processing of the face images added to the target face image cluster within the preset time period. When the growth speed of face images in the target face image cluster is greater than the preset growth speed, it indicates that the face images in the target face image cluster are growing abnormally, which in practice may mean that the target face image cluster is under a spoofing attack.
Because whether to generate the prompt information is determined according to the growth speed of the face images in the target face image cluster, the sensitivity of sensing attacks by abnormal face images can be improved, attacks on face images can be identified in time, and the security of face recognition is ensured.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the content of the description, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the present invention are set forth below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an application environment diagram of a face image processing method in an embodiment of the present invention;
FIG. 2 is a flow chart of steps of a face image processing method in an embodiment of the invention;
FIG. 3 is a flow chart illustrating steps of a face image processing method according to another embodiment of the present invention;
FIG. 4 is a flowchart showing steps for performing in-vivo detection on a face image of a user to be registered in an alternative face image processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a face image processing apparatus in an embodiment of the present invention;
FIG. 6 is a schematic diagram of a server according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Based on the technical problems to be solved, the inventors propose the face image processing method of the embodiments of the invention. The face image processing method can monitor the growth speed of the face images in a face image cluster, and when the growth speed is determined to be greater than a preset growth speed, it can be determined that the cluster is subject to a spoofing attack, so that prompt information is generated; the prompt information can be used to prompt processing of the face images in the target face image cluster, thereby alleviating the technical problem that existing face recognition technology has low sensitivity in recognizing spoofing attacks.
Next, the face image processing scheme of the present invention will be described clearly and completely.
Referring to FIG. 1, an application environment diagram of a face image processing method according to an embodiment of the present invention is shown. As shown in FIG. 1, the face image processing method may be applied to a server 11 or a mobile terminal 12. When applied to the server 11, the server 11 may be communicatively connected with a plurality of clients 13, so as to rapidly recognize a large number of face images to be recognized uploaded by the plurality of clients; this is suitable for business fields requiring large amounts of face recognition, such as information entry and the handling of banking business. When applied to the mobile terminal 12, the method is suitable for business fields with small face recognition volumes, such as access control of an office or residential building, attendance card-punching management of a company, and security inspection. Of course, the method can also be applied to interaction scenarios between the server 11 and the mobile terminal 12.
The mobile terminal may be a smart device such as a mobile phone, a tablet computer or an all-in-one machine.
Referring to fig. 2, a flowchart of steps of a face image processing method according to an embodiment of the present invention is shown. As shown in fig. 2, the face image processing method specifically includes the following steps:
Step S21, a face image of a user to be registered is obtained.
The user to be registered may be a user registering on a service client that requires face recognition, and the face image may be a face image captured by the user to be registered in real time. Specifically, the face image may be an image captured in real time according to a prompt of the client when the user registers, or a face image obtained from a database storing face images of the user to be registered.
When the method is applied to a terminal, the terminal may acquire the image from a picture library of images captured by the camera configured on the terminal; when the method is applied to a server, the server may acquire the face image from a client communicatively connected with the server, or may acquire the face image from a database storing face images of the user to be registered.
Step S22, adding the face image of the user to be registered into a target face image cluster.
The similarity among face images belonging to the target face image cluster is larger than a preset similarity threshold.
The target face image cluster may be one of a plurality of registered face image clusters; the face images in each face image cluster belong to the same user, and the similarity between the face images in each face image cluster is greater than a preset similarity threshold. The similarity between each face image in the target face image cluster and the face image of the user to be registered is greater than the preset similarity threshold. That is, the face image of the user to be registered may be added to the target face image cluster belonging to the same user as the user to be registered.
For example, suppose the face image of a user w to be registered is an image B, and the face image clusters of 3 registered users are currently an image cluster A1, an image cluster A2, and an image cluster A3. If the image cluster A1 includes 10 face images of the user w, the image cluster A2 includes 4 face images of the user h, and the image cluster A3 includes 8 face images of the user g, the face image B may be added to the face image cluster A1.
Step S23, when the growth speed of the face images in the target face image cluster is larger than a preset growth speed, a prompt message is generated.
The prompt information is used for prompting the processing of the face images added to the target face image cluster in a preset time period.
In order to improve the invention's ability to recognize large numbers of similar pictures, to improve the sensitivity of recognizing malicious attacks that use large numbers of similar pictures, and to ensure the security of face recognition, in this embodiment the growth speed of face images in the target face image cluster can be monitored in real time.
The growth speed of face images in the target face image cluster may refer to the average number of face images added to the target face image cluster over a certain period of time. For example, if it is detected that the number of face images has increased by 48 within 24 hours, the growth speed is 2 images per hour. In practice, the preset growth speed may be set in advance according to requirements, and the embodiment of the present invention does not limit it.
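As an illustration of this monitoring logic only, the following is a minimal sketch (not taken from the patent; the sliding-window approach, function names and the example threshold of 2 images per hour are assumptions) of how the growth speed of a cluster could be computed from the timestamps of added images and compared with a preset value:

```python
from datetime import datetime, timedelta
from typing import List

def growth_speed(add_times: List[datetime], window_hours: float = 24.0) -> float:
    """Average number of face images added to a cluster per hour
    over the most recent time window (sliding-window assumption)."""
    now = datetime.utcnow()
    window_start = now - timedelta(hours=window_hours)
    recent = [t for t in add_times if t >= window_start]
    return len(recent) / window_hours

def should_prompt(add_times: List[datetime], preset_speed: float = 2.0) -> bool:
    """Generate prompt information when the growth speed exceeds the preset speed."""
    return growth_speed(add_times) > preset_speed
```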
In implementation, when the growth speed of face images in the target face image cluster reaches the preset growth speed, this indicates that a large number of similar pictures have been added to the target face image cluster within a certain time and that the cluster may be under attack from similar pictures, so prompt information can be generated to prompt processing of the target face image cluster.
In practice, when processing of the target face image cluster is prompted, processing of the face images added to the target face image cluster within a preset time period may be prompted. The preset time period may be an abnormal growth time period in which the growth speed of face images in the target face image cluster is greater than the preset growth speed, so that the face images involved in the abnormal attack can be processed. Alternatively, the preset time period may refer to a normal growth time period in which the growth speed of face images in the target face image cluster does not exceed the preset growth speed, so that the face images added to the target face image cluster during the normal growth period can be processed to ensure that normal face images are not interfered with.
When processing the face images added to the target face image cluster within the preset time period, if the processed face images were added to the target face image cluster during the abnormal growth period, they are face images that need to be handled as an abnormal attack. In this case, anti-attack processing can be performed on this part of the face images; for example, they can be marked as abnormal to prevent similar face images from attacking the target face image cluster again.
If the processed face images were added to the target face image cluster during the normal growth period, they can be given protective processing; for example, they can be marked as normal to identify them as normal face images in the target face cluster.
Of course, in practice, the processing of the face images added to the target face image cluster within the preset time period is not limited to the above. In order to ensure the security of face recognition and to improve the protection of the face images in the target face image cluster when an attack occurs, stricter detection may also be performed on the user corresponding to the target face image cluster, for example living face detection, iris verification, or fingerprint verification.
In the embodiment of the invention, the growth speed of the face images in the target face image cluster can be monitored, and when the growth speed of the face images in the target face image cluster is greater than the preset growth speed, the prompt information is generated so as to prompt the processing of the face images added into the target face image cluster in the preset time period. Therefore, the recognition sensitivity of whether the face image in the target face cluster encounters an attack is improved, the attack on the face image can be recognized in time, and the face image added into the target face image cluster in a preset time period is processed in time, so that the problem of reduced face recognition safety caused by the attack is avoided.
Referring to fig. 3, a flowchart illustrating steps of a face image processing method in yet another embodiment, as shown in fig. 3, may specifically include the following steps:
step S31: and obtaining a face image of the user to be registered.
The face image of the user to be registered may be a face image in a face video shot by the user during registration, or may be a shot face image.
In practice, in order to further improve the recognition sensitivity for the face image of the user to be registered, it is recognized whether the face image of the user to be registered comes from a living body, and the subsequent face image adding operation is performed only when it does. This improves the authenticity of the face images in each face image cluster, so that all face images in the face image clusters come from living bodies. The following steps may be included after step S31:
step S32: and performing living body detection on the face image of the user to be registered.
Living body detection refers to a method of determining the true physiological characteristics of an object to be authenticated in certain authentication scenarios; in face recognition applications, living body detection may refer to verifying whether the user is a real person.
Performing living body detection on the face images of users to be registered can prevent attacks from fake face models, so that the face images in the face image clusters all come from real faces, improving the security of face recognition. Specifically, when the face image of the user to be registered passes the living body detection, step S33 may be performed to add the face image of the user to be registered to the target face image cluster. When the living body detection fails, this indicates that the system may be under attack from a fake face model, and alarm information can be output and displayed; in practice, the alarm information can also be sent to a designated terminal to prompt staff to handle the situation.
Step S33: and adding the face image of the user to be registered into a target face image cluster.
The similarity among face images belonging to the target face image cluster is larger than a preset similarity threshold.
When the face image of the user to be registered passes through the living body detection, the similarity between the face image of the user to be registered and each registered face image can be determined, and the face image of the user to be registered is added into a corresponding target face image cluster according to the similarity between the face image of the user to be registered and each registered face image. The registered face images refer to face images that have been registered, and each registered face image may belong to a face image cluster, for example, if a face image a has been registered, the face image a may be considered as a registered face image, and the face image a belongs to the face image cluster A1.
In this embodiment, the similarity between the face image of the user to be registered and a registered face image represents how similar the two images are, and may be a value between 0 and 1. The closer the similarity is to 1, the more similar the two face images are; in practice, this may indicate that the two face images come from the same person. Conversely, the closer the similarity is to 0, the greater the difference between them.
In implementation, the Euclidean distance or the cosine distance can be used to measure the similarity of two face images; that is, the Euclidean distance or cosine distance between the face image of the user to be registered and a registered face image is determined, and the determined distance is used as the similarity between the two. The Euclidean distance is the distance between corresponding feature points of the two face images, calculated directly with the Euclidean formula, and the cosine distance measures the difference between the two face images using the cosine of the angle between their two vectors in the vector space.
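As a minimal sketch of the two measures mentioned above (assuming the face images have already been converted into fixed-length feature vectors by some feature extractor, which the patent does not specify; the mapping of Euclidean distance into a 0-1 similarity is also an assumed convention):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two face feature vectors; close to 1 means similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Map the Euclidean distance into (0, 1] so that 1 means identical vectors
    (one possible convention; the patent only says the distance is used as similarity)."""
    return 1.0 / (1.0 + float(np.linalg.norm(a - b)))
```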
When the similarity between the face image of the user to be registered and the target registered face image in all the registered face images is determined to be larger than the preset similarity threshold, the face image of the user to be registered is added into a target face image cluster to which the target registered face image belongs.
In this embodiment, the similarity threshold may be set according to the requirement, and in practice, the larger the preset threshold is, the higher the standard for face recognition is, which is beneficial to resisting spoofing of a fake face image.
When the similarity between the face image of the user to be registered and each registered face image is determined, a plurality of similarities are obtained. The similarities greater than the preset similarity threshold can then be screened out from them, and the target registered face image corresponding to a similarity greater than the preset similarity threshold is obtained. This indicates that the face image of the user to be registered and the target registered face image come from the same face, and the face image of the user to be registered can therefore be added to the target face image cluster to which the target registered face image belongs.
When the similarity between the face image of the user to be registered and each registered face image is not greater than the preset similarity threshold, a target face image cluster corresponding to the user to be registered is created, and the face image to be registered is added to that target face image cluster.
In this case, there is no registered face image similar to the face image of the user to be registered, so the face image of the user to be registered can be considered a new face image of an unregistered user. The user to be registered can then be registered according to this face image; at the same time, a corresponding target face image cluster is created for the face image of the user to be registered, and the face image is added to it.
With this technical scheme, when the user to be registered is not one of the registered users, a new target face image cluster can be created for the user to be registered, so that the face image clusters of all registered users are dynamically updated.
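The assignment logic described above can be summarized in the following sketch (a simplified illustration with an in-memory cluster store; the threshold value, storage and matching strategy are assumptions, not details disclosed by the patent):

```python
import numpy as np
from typing import Dict, List

SIMILARITY_THRESHOLD = 0.8  # preset similarity threshold (assumed value)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def add_to_cluster(embedding: np.ndarray,
                   clusters: Dict[str, List[np.ndarray]]) -> str:
    """Add a new registrant's face embedding to the best-matching cluster,
    or create a new cluster when no registered face is similar enough."""
    best_id, best_sim = None, -1.0
    for cluster_id, members in clusters.items():
        for registered in members:
            sim = cosine_similarity(embedding, registered)
            if sim > best_sim:
                best_id, best_sim = cluster_id, sim
    if best_id is not None and best_sim > SIMILARITY_THRESHOLD:
        clusters[best_id].append(embedding)       # same person: join existing cluster
        return best_id
    new_id = f"cluster_{len(clusters) + 1}"       # unseen person: create a new cluster
    clusters[new_id] = [embedding]
    return new_id
```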
Step S34: and generating prompt information when the growth speed of the face images in the target face image cluster is greater than a preset growth speed.
The prompt information is used for prompting the processing of the face images added to the target face image cluster in a preset time period.
The procedure of this step is similar to that of step S23 described above; reference may be made to step S23, and the details are not repeated here.
In one embodiment, after the prompt information is generated, the face images added to the target face image cluster in a preset time period may be processed in response to the prompt information, where the preset time period may be an abnormal growth time period in which the growth speed of the face images in the target face image cluster is greater than a preset growth speed. Specifically, the processing may be at least one of the following steps S35 to S37.
Step S35: and responding to the prompt information, and performing living detection on the face images added to the target face image cluster in a preset time period.
In practice, after the prompt information is generated, the human face images added to the target human face image cluster in the preset time period can be subjected to living body detection in response to the prompt information, and specifically, the living body detection can be performed by adopting the existing image living body detection technology.
Step S36: and responding to the prompt information, and sending the face images added to the target face image cluster in the preset time period to a preset terminal.
In this embodiment, manual living body detection may also be performed on the face images added to the target face image cluster, so as to ensure the account security of the target registered user. In a specific implementation, a manual auditing prompt can be generated and sent to the terminal device held by the management user, so that the management user can arrange manual detection of the corresponding users in the target face image cluster.
Step S37: and responding to the prompt information, and outputting alarm information.
The alarm information comprises user information corresponding to the target face cluster so as to prompt the user to be attacked.
In this embodiment, the alarm information may include user information corresponding to the target face cluster, and the alarm information may be sent to a designated user, for example, a management user, so that the management user may further process the user information corresponding to the target face image cluster through the alarm information.
Of course, the output alarm information may also be audible and visual alarm information, for example a buzzer alarm or a controlled flashing alarm lamp, so that managers can handle the abnormal situation in time.
In this embodiment, when living body detection is performed on the face image of the user to be registered, different embodiments of living body detection may be used according to the attack mode to be defended against. They are explained as follows:
Embodiment A: In this embodiment, in order to enhance the ability of living body detection to resist attacks such as re-shot photo cards, re-shot screens and face masks, living body detection can be performed using a face image of the user to be registered responding to an action instruction. Specifically, performing living body detection on the face image of the user to be registered may include the following steps:
step S321: and responding to a registration application sent by the user to be registered, and sending an action instruction to the user to be registered.
The registration application may be a registration application sent by the user to be registered when performing the registration operation, and the action instruction may be a randomly generated and sent instruction, where the action instruction may be used to instruct the user to be registered to perform a targeted action, such as blinking, lifting a hand, smiling, opening a mouth, shaking a head, and the like.
Step S322: and obtaining a face image of the user to be registered responding to the action instruction.
Responding to the action instruction means that the user to be registered performs the action indicated by the action instruction, and the face image of the user to be registered obtained in this case is an image of the user performing the indicated action. For example, if the user to be registered performs a mouth-opening action, the obtained face image of the user to be registered may be a picture of the user to be registered with the mouth open.
Step S323, determining whether the face image of the user to be registered matches with the action instruction.
In practice, a match between the face image of the user to be registered and the action instruction may mean that the position and shape features of the key points in the face image match the action instruction. If the action instruction is an instruction to open the mouth and the mouth in the face image is in an open shape, it can be determined that the image matches the action instruction; if the mouth is closed, it can be determined that the image does not match the action instruction, in which case the system may be under attack from re-shot photo cards, re-shot screens or face masks.
Step S324, when it is determined that the face image of the user to be registered matches the action instruction, it is determined that the face image of the user to be registered passes through the living body detection.
In implementation, when the face image of the user to be registered matches the action instruction, it can be determined that the face image of the user to be registered comes from the operation of a real person, and the living body detection is passed. When the face image of the user to be registered does not match the action instruction, it is determined that the face image of the user to be registered does not come from the operation of a real person; face recognition of the user to be registered can be stopped, and at the same time alarm information can be generated to indicate the attack.
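A minimal sketch of this matching check (assuming facial landmarks are already available from an off-the-shelf landmark detector; the landmark names, instruction string and the mouth-aspect-ratio threshold below are assumed values, not taken from the patent):

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

def mouth_open(landmarks: Dict[str, Point], threshold: float = 0.35) -> bool:
    """Decide whether the mouth is open from four mouth landmarks by comparing
    the vertical opening with the mouth width (a simple aspect-ratio heuristic)."""
    top, bottom = landmarks["mouth_top"], landmarks["mouth_bottom"]
    left, right = landmarks["mouth_left"], landmarks["mouth_right"]
    opening = abs(bottom[1] - top[1])
    width = abs(right[0] - left[0])
    return width > 0 and opening / width > threshold

def matches_instruction(landmarks: Dict[str, Point], instruction: str) -> bool:
    """Check whether the captured face matches the issued action instruction."""
    if instruction == "open_mouth":
        return mouth_open(landmarks)
    raise ValueError(f"unsupported instruction: {instruction}")
```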
Embodiment B: in combination with embodiment a, that is, in the case of responding to the registration application sent by the user to be registered, sending an action instruction to the user to be registered and obtaining a face image of the user to be registered responding to the action instruction, in order to improve the capability of in vivo detection on the attack against the static image, the attack of a plurality of static images or videos recorded in advance is prevented. When the face image of the user to be registered responding to the action instruction is obtained, a face video can be obtained, so that living detection can be carried out on the face video, and the method specifically comprises the following steps:
step S321', the face video of the user to be registered responding to the action instruction is obtained.
In a specific implementation, the action instruction may include a plurality of sequential actions to instruct the user to be registered to perform a plurality of corresponding actions in sequence. For example, if the action instruction sequentially includes lowering the head, blinking and shaking the head, the user to be registered needs to perform the actions of lowering the head, blinking and shaking the head in sequence.
Accordingly, referring to fig. 4, a flowchart illustrating steps of performing in-vivo detection on the face video may specifically include the following steps:
step S322', sequentially extracting a plurality of frames of face images from the face video according to time sequence, wherein each frame of face image in the plurality of frames of face images comprises key position points.
The face video may include a plurality of frames of images in time-sequential order, and each frame of image may have its own timestamp information, the timestamp representing the time order of that frame in the face video. In the embodiment of the invention, multiple frames of face images arranged in time order can be sequentially extracted from the face video, and each frame of face image contains key position points. A key position point can be a point on the user's face whose position changes along with the action during the action process, and there may be one or more key position points. For example, the key position points may be the nose, chin, cheek, or eyes.
Step S323', determining the matching degree of the position change of the key position point in the multi-frame face image according to the time sequence and the action instruction.
In practice, a face key point alignment algorithm may be used to determine the change in the position of the key point in each frame of face image, and the matching degree between the change in the key point's position and the action instruction may be determined. The matching degree may represent how well the change in the key point's position corresponds to each action in the action instruction. In practice, the matching degree may take a value between 0 and 1; the higher the matching degree, the more closely the key point's position changes correspond to the actions in the action instruction.
In practice, when there are multiple key position points, the matching degree between the change of each key position point in the multi-frame face images and the action instruction may be determined, and the average of these matching degrees may be taken as the final matching degree.
For example, suppose the key position points are the nose and the upper eyelid. The position changes of the nose and the upper eyelid in the multi-frame images can then be determined respectively; if the matching degree between the position change of the nose over time and the action instruction is determined to be 0.8, and the matching degree between the position change of the upper eyelid over time and the action instruction is determined to be 0.9, the average of the two matching degrees can be taken as the final matching degree, which in this example is 0.85.
And step S324', when the matching degree is larger than the preset matching degree, determining that the face image of the user to be registered passes through the living body detection.
The preset matching degree can be set according to actual requirements. When the matching degree is greater than the preset matching degree, this indicates that the face video was shot while a real person performed the actions in sequence according to the action instruction, and it is therefore determined that the face image of the user to be registered passes the living body detection.
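The per-keypoint matching and averaging described above could look roughly like the sketch below (under the assumption that, for each key position point, some lower-level routine already scores how well its trajectory over the extracted frames fits the instructed actions; that routine and the preset value of 0.7 are assumptions, not details disclosed by the patent):

```python
from typing import Callable, Dict, List, Sequence, Tuple

Point = Tuple[float, float]
Trajectory = List[Point]  # the key point's position in each extracted frame, in time order

def overall_matching_degree(
    trajectories: Dict[str, Trajectory],
    score_fn: Callable[[str, Trajectory, Sequence[str]], float],
    actions: Sequence[str],
) -> float:
    """Average the per-keypoint matching degrees (each in [0, 1]) into a final score."""
    scores = [score_fn(name, traj, actions) for name, traj in trajectories.items()]
    return sum(scores) / len(scores) if scores else 0.0

def passes_liveness(matching_degree: float, preset: float = 0.7) -> bool:
    """Pass living body detection when the final matching degree exceeds the preset value."""
    return matching_degree > preset
```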
Embodiment C: in combination with the embodiment a, that is, in the case of responding to the registration application sent by the user to be registered, sending an action instruction to the user to be registered and obtaining a face image of the user to be registered responding to the action instruction, in order to improve the capability of the face recognition technology in resisting the attack on the screen picture, in response to the registration application sent by the user to be registered, sending the action instruction to the user to be registered, and meanwhile, controlling the terminal currently logged in by the user to be registered to flash the screen color.
The flashing screen color may be that the window color is alternately changed at a window for collecting the face image on a display screen of the terminal, or that the color of a display interface is alternately changed on the display interface for collecting the face image.
Accordingly, the face image shot by the user to be registered in response to the action instruction and under the flashing screen color can be obtained.
In this embodiment, the face image of the user to be registered is a face image captured while the user to be registered responds to the action instruction and the screen color flashes. When the screen color flashes, the flashing screen color produces an optical signal that reaches, and is reflected by, a real face during shooting, so the finally obtained face image of the user to be registered carries information about the light reflected under the flashing screen colors.
Accordingly, when the face image of the user to be registered is subjected to living body detection, whether the face image of the user to be registered is a living body image or not can be detected by carrying out reflection spectrum analysis of visible color light on the face image of the user to be registered. And when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes through the living body detection.
Reflection spectrum analysis in the present invention may refer to: obtaining a spectral image corresponding to the face image of the user to be registered, i.e. converting the face image of the user to be registered into a spectral image; extracting from the spectral image, according to the wavebands of the flashing screen colors, the reflectance curve features reflected by real facial skin in those wavebands; and performing spectral living body detection on the spectral image according to the reflectance curve features.
In practice, since the reflectivity of the surfaces of different objects to the same light is different, the reflectivity curve features are also different, and it can be distinguished whether the above-mentioned screen color light is reflected by a real face or by other objects, such as a photo, a mobile phone screen photo, etc.
In practice, when the reflectance curve features extracted from the spectral image are determined to be true skin, it may be determined that the face image of the user to be registered is a living body image. When the reflectivity curve features extracted from the spectrum images are judged to be not real skin, the face images of the users to be registered can be determined not to be living images, face recognition of the users to be registered can be stopped, and meanwhile, alarm information can be generated to prompt attack.
When the embodiment is adopted, living body detection of the face image can be realized by flashing screen color and carrying out reflection spectrum analysis on the face image so as to identify whether the face image of the user to be registered is the face image of the real face or not, thereby preventing attacks such as screen photos and the like.
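As a very rough illustration only (the patent does not disclose the spectral model; everything below, including the reference reflectance values and the correlation test, is an assumption), a reflectance-based check could compare the measured per-waveband response of the face region against a stored reflectance curve for real skin:

```python
import numpy as np

# Assumed reference reflectance of real skin at the wavebands of the flashed
# screen colors (e.g. red, green, blue); illustrative numbers only.
REAL_SKIN_REFLECTANCE = np.array([0.45, 0.30, 0.25])

def looks_like_real_skin(measured: np.ndarray,
                         reference: np.ndarray = REAL_SKIN_REFLECTANCE,
                         min_correlation: float = 0.9) -> bool:
    """Treat the face as live when the measured reflectance curve over the
    flashed wavebands correlates strongly with the real-skin reference curve."""
    corr = float(np.corrcoef(measured, reference)[0, 1])
    return corr >= min_correlation
```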
Embodiment D: in this embodiment, in order to further improve the recognition sensitivity of the face image of the user to be registered, and improve the capability of the face recognition technology to resist attacks such as filtering cards, screen capturing and face masks, when the face image of the user to be registered is detected in vivo, the face image of the user to be registered may be detected in vivo and/or in a screen capturing manner, so as to determine whether the face image of the user to be registered is a live image. And when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes through the living body detection.
In specific implementation, the face image of the user to be registered may be input into the trained living body face recognition model to determine whether the user is a living body image. Specifically, the living body face recognition model can be obtained by training a convolutional neural network model by using a large number of real face image samples, screen face image samples and skin mask samples as training samples. The specific training process can be referred to the existing machine learning model technology, and will not be described herein.
When the face image of the user to be registered is input to the living face recognition model, whether the face image of the user to be registered is a face image of a real face or not is recognized by the living face recognition model.
By adopting the technical scheme, the living body detection can be carried out on the face image of the user to be registered by utilizing the living body face recognition model, so that the living body detection efficiency and recognition accuracy can be improved.
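A minimal inference sketch for such a model (assuming a binary classifier has already been trained, for example in PyTorch, on real face, screen re-shot and mask samples; the input size and output layout are assumptions, not details from the patent):

```python
import torch
from torchvision import transforms
from PIL import Image

# Preprocessing matching an assumed 224x224 RGB training setup.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def is_live(image_path: str, model: torch.nn.Module) -> bool:
    """Return True when the trained binary classifier scores the image as a live face."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)       # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)                    # assumed output: (1, 2) = [spoof, live]
    return bool(logits.argmax(dim=1).item() == 1)
```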
Embodiment E: in this embodiment, in order to further improve the recognition sensitivity of the face image of the user to be registered, and improve the capability of the face recognition technology in resisting the attack of the image synthesis software, that is, the face recognition technology of the present invention can recognize the tampered face image, and when the face image of the user to be registered is detected in a living body, the face image of the user to be registered can be detected in an image tampering manner, so as to determine whether the face image of the user to be registered is a tampered image. And when the face image of the user to be registered is not a tampered image, determining that the face image of the user to be registered passes through the living body detection.
In specific implementation, the face image of the user to be registered can be input into the trained image falsification model, so that whether the image is falsified or not can be identified through the image falsification model. Specifically, the image tampering model may be a model obtained by inputting a large number of original face image samples and synthesized face image samples into a neural network model and training the neural network model. The specific training process can be referred to the existing machine learning model technology, and will not be described herein.
Because the image tampering model is trained on original face image samples and synthesized face image samples, it can identify various kinds of synthesized face images, so that whether the face image of the user to be registered is a tampered image can be determined. If the image has not been tampered with, it can be taken to be a genuine image of the real face of the user to be registered.
By adopting this technical scheme, the image tampering model can be used to carry out living body detection on the face image of the user to be registered, so that both the living body detection efficiency and the recognition accuracy can be improved.
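The small sketch below illustrates how a trained image tampering model might gate the registration flow described above; detect_tampering and add_to_cluster are hypothetical placeholders for the tampering classifier and the clustering step, injected here as callables.

```python
def register_face(face_image, detect_tampering, add_to_cluster) -> bool:
    """Only face images that pass the tamper check continue to clustering.

    detect_tampering: any trained binary classifier returning True for
    edited/synthesized images; add_to_cluster: the clustering step described
    in this method. Both names are illustrative assumptions.
    """
    if detect_tampering(face_image):   # tampered image: reject registration
        return False                   # living body detection fails
    add_to_cluster(face_image)         # genuine image: proceed to the target cluster
    return True
```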
In this embodiment, after a face image of a user to be registered is obtained, living body detection is performed on the face image of the user to be registered, and after the detection passes, the face image to be registered is added to a target face image cluster; the growth speed of the face images in the target face image cluster is monitored, and when the growth speed is greater than a preset growth speed, prompt information is generated to prompt processing of the face images added to the target face image cluster within a preset time period. Because living body detection is performed on the face image of the user to be registered and the image is added to the target face image cluster only when the living body detection passes, attacks on the face image cluster by non-real faces are avoided, the face images in the face image cluster all come from real faces, and the security of face recognition is improved.
Moreover, since the growth speed of the face images in the target face image cluster is monitored, and prompt information is generated when that growth speed is greater than the preset growth speed to prompt processing of the face images added to the target face image cluster within a preset time period, the sensitivity with which attacks on the face images in the target face image cluster are recognized is improved, and such attacks can be recognized in time, thereby avoiding the reduction in face recognition security that they would otherwise cause.
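One possible way to monitor the growth speed of a face image cluster is sketched below, assuming a sliding time window and a per-cluster threshold; the one-hour window and the threshold of five additions are illustrative values only, not values specified by this embodiment.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class ClusterGrowthMonitor:
    """Counts additions to each face image cluster inside a sliding window."""

    def __init__(self, max_per_window: int = 5, window_seconds: int = 3600):
        self.max_per_window = max_per_window   # preset growth speed (assumed value)
        self.window_seconds = window_seconds   # preset time window (assumed value)
        self._events = defaultdict(deque)      # cluster_id -> addition timestamps

    def record_addition(self, cluster_id: str, now: Optional[float] = None) -> bool:
        """Record one face added to the cluster; return True if prompt info should fire."""
        now = time.time() if now is None else now
        events = self._events[cluster_id]
        events.append(now)
        # Drop additions that fall outside the monitoring window.
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        return len(events) > self.max_per_window

# Usage: fire an alert when a single cluster grows faster than the preset speed.
monitor = ClusterGrowthMonitor()
if monitor.record_addition("cluster_of_user_42"):
    print("growth speed exceeded: review faces added to this cluster recently")
```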
Referring to fig. 5, a schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention is shown, and as shown in fig. 5, the apparatus is applied to a mobile terminal or a server, and may specifically include the following modules:
the face image obtaining module 51 may be configured to obtain a face image of a user to be registered;
the face image adding module 52 may be configured to add the face image of the user to be registered to a target face image cluster, where a similarity between face images belonging to the target face image cluster is greater than a preset similarity threshold;
the prompt information generating module 53 is configured to generate, when the growth speed of the face images in the target face image cluster is greater than a preset growth speed, prompt information, where the prompt information is used to prompt processing of the face images added to the target face image cluster in a preset time period.
Optionally, the face image adding module 52 includes:
the first adding module is used for adding the face image of the user to be registered into a target face image cluster to which the target registered face image belongs when the similarity between the face image of the user to be registered and the target registered face image in each registered face image is determined to be larger than the preset similarity threshold;
and the second adding module is used for creating a target face image cluster corresponding to the user to be registered and adding the face image to be registered into the target face image cluster when the similarity between the face image of the user to be registered and each registered face image is not larger than the preset similarity threshold value.
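The two adding branches above can be illustrated with the following sketch, which assigns a face embedding to the most similar existing cluster or creates a new one; the cosine similarity measure, the 0.8 threshold and the embedding source are assumptions made for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def add_to_cluster(embedding: np.ndarray, clusters: dict, threshold: float = 0.8) -> str:
    """clusters: cluster_id -> list of registered face embeddings (each non-empty)."""
    best_id, best_sim = None, threshold
    for cluster_id, members in clusters.items():
        # Similarity to a cluster = best similarity to any registered face in it.
        sim = max(cosine_similarity(embedding, m) for m in members)
        if sim > best_sim:
            best_id, best_sim = cluster_id, sim
    if best_id is None:                        # no registered face is similar enough
        best_id = f"cluster_{len(clusters)}"   # create a new target cluster
        clusters[best_id] = []
    clusters[best_id].append(embedding)        # add the face to the target cluster
    return best_id
```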
Optionally, the apparatus may further specifically include the following modules:
the living body detection module can be used for carrying out living body detection on the face image of the user to be registered;
the face image adding module is specifically configured to add the face image of the user to be registered to a target face image cluster when the face image of the user to be registered passes through living detection.
Accordingly, in case of comprising a living body detection module, optionally, the apparatus may further specifically comprise the following modules:
The action instruction sending module can be used for responding to a registration application sent by the user to be registered and sending an action instruction to the user to be registered;
the face image obtaining module is specifically configured to obtain a face image of the user to be registered in response to the action instruction;
the living body detection module can be specifically used for determining whether the face image of the user to be registered is matched with the action instruction; and when the face image of the user to be registered is determined to be matched with the action instruction, determining that the face image of the user to be registered passes through the living body detection.
Optionally, the living body detection module may specifically include the following units:
the screen capturing detection unit can be used for carrying out living body and/or screen capturing detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a living body image or not;
the first determining unit may be configured to determine that the face image of the user to be registered is detected by the living body when determining that the face image of the user to be registered is a living body image.
Alternatively, the living body detection module may specifically include the following units:
the image tampering detection unit can be used for performing image tampering detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a tampered image;
The second determining unit may be configured to determine that the face image of the user to be registered passes through the living body detection when it is determined that the face image of the user to be registered is not a tampered image.
Optionally, the face image obtaining module 51 may be specifically configured to obtain a face video of the user to be registered in response to the action instruction; accordingly, the living body detection module may specifically include the following units:
the face image extraction unit can be used for sequentially extracting a plurality of frames of face images from the face video according to time sequence, wherein each frame of face image in the plurality of frames of face images comprises key position points;
the matching degree determining unit can be used for determining the matching degree of the position change of the key position point in the multi-frame face image according to the time sequence and the action instruction;
and the third determining unit may be configured to determine that the face image of the user to be registered passes through the living body detection when the matching degree is greater than a preset matching degree.
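For the matching degree determination above, one illustrative approach is to correlate the trajectory of the relevant key position points across frames with the motion profile implied by the action instruction, as sketched below for an "open your mouth" instruction; the landmark indices, the normalization and the 0.7 threshold are assumptions, not details from this embodiment.

```python
import numpy as np

def mouth_opening(landmarks: np.ndarray) -> float:
    """landmarks: (N, 2) key points; distance between inner upper/lower lip points
    (indices follow the common 68-point convention, assumed here)."""
    upper, lower = landmarks[62], landmarks[66]
    return float(np.linalg.norm(lower - upper))

def matching_degree(frame_landmarks: list, expected: np.ndarray) -> float:
    """Correlate the observed opening over time with the instructed motion profile.

    frame_landmarks: one landmark array per extracted frame (time-ordered);
    expected: profile of the same length describing the instructed action.
    """
    observed = np.array([mouth_opening(lm) for lm in frame_landmarks])
    observed = (observed - observed.min()) / (np.ptp(observed) + 1e-6)
    return float(np.corrcoef(observed, expected)[0, 1])

def passes_action_check(frame_landmarks: list, expected: np.ndarray,
                        threshold: float = 0.7) -> bool:
    return matching_degree(frame_landmarks, expected) > threshold
```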
Optionally, the action instruction sending module may specifically include the following units:
the color light control unit can be used for controlling the flashing screen color of the terminal on which the user to be registered is currently logged in;
The face image obtaining module 51 may specifically be configured to obtain a face image that is taken by the user to be registered in response to the action instruction and in the color of the flashing screen;
accordingly, the living body detection module may specifically include the following units:
the reflection spectrum analysis unit can be used for carrying out reflection spectrum analysis of visible color light on the face image of the user to be registered so as to detect whether the face image of the user to be registered is a living body image or not;
and the fourth determining unit may be configured to determine that the face image of the user to be registered is detected by the living body when determining that the face image of the user to be registered is a living body image.
Optionally, the apparatus may further include the following modules:
the first response module is used for responding to the prompt information and performing living detection on the face images added to the target face image cluster in a preset time period;
the second response module is used for responding to the prompt information and sending the face images added to the target face image cluster in the preset time period to a preset terminal;
and the third response module is used for responding to the prompt information and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face image cluster, so as to prompt that the user has been attacked.
For the face image processing apparatus embodiment, since it is substantially similar to the face image processing method embodiment, the description is relatively simple; for relevant details, reference may be made to the corresponding parts of the description of the face image processing method embodiment.
Referring to fig. 6, a schematic structural diagram of a server 600 according to an embodiment of the present invention is shown, where the server 600 may include a face image processing device 61 and a database 62, and may further include a network interface 64 and a data interface 63. A plurality of face image clusters of registered users may be stored in the database 62, each face image cluster comprising a plurality of face images of the same registered user, and the face image processing means 61 may be adapted to perform the face image processing method. Specifically, the face image processing apparatus may be an apparatus combining software and hardware, the hardware may include physical keys, the physical keys may be used to provide functions of return, confirmation, etc., and the software includes an application program; the face image processing apparatus may cooperate with the database 62 through software and hardware to implement the face image processing method described in the above embodiment.
Referring to fig. 7, a schematic structural diagram of a terminal device 700 according to an embodiment of the present invention is shown, and as shown in fig. 7, the terminal device 700 may include: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, processor 710, and power supply 711. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 7 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and transmitting signals in the process of transceiving information or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 710 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 702, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal device 700. The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used for receiving an audio or video signal. The input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 can receive sound and can process such sound into audio data. In the case of a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701 and output.
The terminal device 700 further comprises at least one sensor 705, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 7061 and/or the backlight when the terminal device 700 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for recognizing the attitude of the electronic equipment (such as horizontal/vertical screen switching, related games and magnetometer attitude calibration), vibration recognition related functions (such as a pedometer and tapping), and the like. The sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described here again.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations on or near it by a user (for example, operations of the user on or near the touch panel 7071 using any suitable object or accessory such as a finger or a stylus). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 710, and receives and executes commands sent from the processor 710. In addition, the touch panel 7071 may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 7071, the user input unit 707 may include other input devices 7072. Specifically, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not described in detail here.
Further, the touch panel 7071 may be overlaid on the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although in fig. 7 the touch panel 7071 and the display panel 7061 are shown as two independent components for implementing the input and output functions of the electronic device, in some embodiments the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the electronic device, which is not limited here.
The interface unit 708 is an interface to which an external device is connected to the terminal apparatus 700. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 700 or may be used to transmit data between the terminal apparatus 700 and an external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). In addition, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid state storage device.
The processor 710 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby performing overall monitoring of the electronic device. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 710.
The terminal device 700 may further include a power supply 711 (e.g., a battery) for supplying power to the respective components, and preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
In addition, the terminal device 700 includes some functional modules, which are not shown, and will not be described herein.
The embodiment of the invention also provides a mobile terminal, which may be a smart phone, a notebook computer, a tablet computer or other portable intelligent equipment. The mobile terminal may comprise a face image processing apparatus, and the face image processing apparatus is used for executing each process of the above face image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the invention also provides an electronic device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements each process of the above face image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements each process of the above face image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (20)

1. A face image processing method, the method comprising:
obtaining a face image of a user to be registered; the user to be registered is a user registered on a business client needing face recognition;
adding the face images of the user to be registered into a target face image cluster, wherein the similarity among face images belonging to the target face image cluster is larger than a preset similarity threshold, and the target face image cluster is one face image cluster in a plurality of registered face image clusters;
generating prompt information when the growth speed of the face images in the target face image cluster is larger than a preset growth speed, wherein the prompt information is used for prompting the processing of the face images added into the target face image cluster in a preset time period, performing anti-attack processing on part of the face images added into the target face image cluster in an abnormal growth time period and performing protective processing on part of the face images added into the target face image cluster in a normal growth time period;
Adding the face image of the user to be registered to a target face image cluster comprises the following steps:
when the similarity between the face image of the user to be registered and the target registered face image in all the registered face images is determined to be larger than the preset similarity threshold, adding the face image of the user to be registered into a target face image cluster to which the target registered face image belongs;
when the similarity between the face image of the user to be registered and each registered face image is not larger than the preset similarity threshold value, creating a target face image cluster corresponding to the user to be registered, and adding the face image to be registered into the created target face image cluster.
2. The method according to claim 1, wherein after obtaining the face image of the user to be registered, the method further comprises:
performing living body detection on the face image of the user to be registered;
adding the face image of the user to be registered to a target face image cluster comprises the following steps:
and when the face image of the user to be registered passes through living detection, adding the face image of the user to be registered into a target face image cluster.
3. The method of claim 2, wherein prior to obtaining the face image of the user to be registered, the method further comprises:
responding to a registration application sent by the user to be registered, and sending an action instruction to the user to be registered;
obtaining a face image of a user to be registered, including:
obtaining a face image of the user to be registered in response to the action instruction;
performing living body detection on the face image of the user to be registered, including:
determining whether the face image of the user to be registered is matched with the action instruction;
and when the face image of the user to be registered is matched with the action instruction, determining that the face image of the user to be registered passes through the living body detection.
4. The method according to claim 2, wherein the performing the living body detection on the face image of the user to be registered includes:
performing living body and/or screen capturing detection on the face image of the user to be registered to determine whether the face image of the user to be registered is a living body image or not;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes through the living body detection.
5. The method according to claim 2, wherein the performing the living body detection on the face image of the user to be registered includes:
performing image tampering detection on the face image of the user to be registered to determine whether the face image of the user to be registered is a tampered image;
and when the face image of the user to be registered is not a tampered image, determining that the face image of the user to be registered passes through the living body detection.
6. A method according to claim 3, wherein obtaining a face image of the user to be registered in response to the action instruction, further comprises: acquiring a face video of the user to be registered in response to the action instruction;
performing living body detection on the face image of the user to be registered, including:
sequentially extracting a plurality of frames of face images from the face video according to time sequence, wherein each frame of face image in the plurality of frames of face images comprises key position points;
determining the matching degree of the position change of the key position points in the multi-frame face image according to the time sequence and the action instruction;
and when the matching degree is larger than a preset matching degree, determining that the face image of the user to be registered passes through the living body detection.
7. The method of claim 3, wherein in response to a registration request sent by the user to be registered, sending an action instruction to the user to be registered, further comprises:
controlling the flashing screen color of the terminal which is currently logged in by the user to be registered;
obtaining a face image of the user to be registered in response to the action instruction, including:
obtaining a face image shot by the user to be registered in response to the action instruction under the flashing screen color;
performing living body detection on the face image of the user to be registered, including:
carrying out reflection spectrum analysis of visible color light on the face image of the user to be registered so as to detect whether the face image of the user to be registered is a living body image or not;
and when the face image of the user to be registered is determined to be a living body image, determining that the face image of the user to be registered passes through the living body detection.
8. The method of any one of claims 1-7, further comprising at least one of:
responding to the prompt information, and performing living detection on face images added to the target face image cluster in a preset time period;
Responding to the prompt information, and sending the face images added to the target face image cluster in the preset time period to a preset terminal;
and responding to the prompt information, and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face image cluster so as to prompt the user to be attacked.
9. A face image processing apparatus, the apparatus comprising:
the face image acquisition module is used for acquiring a face image of a user to be registered; the user to be registered is a user registered on a business client needing face recognition;
the face image adding module is used for adding the face images of the user to be registered into a target face image cluster, wherein the similarity among face images belonging to the target face image cluster is larger than a preset similarity threshold, and the target face image cluster is one face image cluster in a plurality of registered face image clusters;
the prompt information generation module is used for generating prompt information when the growth speed of the face images in the target face image cluster is greater than a preset growth speed, wherein the prompt information is used for prompting the processing of the face images added into the target face image cluster in a preset time period, the attack prevention processing of partial face images added into the target face image cluster in an abnormal growth time period and the protective processing of partial face images added into the target face image cluster in a normal growth time period;
The face image adding module comprises:
the first adding module is used for adding the face image of the user to be registered into a target face image cluster to which the target registered face image belongs when the similarity between the face image of the user to be registered and the target registered face image in each registered face image is determined to be larger than the preset similarity threshold;
and the second adding module is used for creating a target face image cluster corresponding to the user to be registered and adding the face image to be registered into the target face image cluster when the similarity between the face image of the user to be registered and each registered face image is not larger than the preset similarity threshold value.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the living body detection module is used for carrying out living body detection on the face image of the user to be registered;
the face image adding module is specifically configured to add the face image of the user to be registered to a target face image cluster when the face image of the user to be registered passes through living detection.
11. The apparatus of claim 10, wherein the apparatus further comprises:
The action instruction sending module is used for responding to a registration application sent by the user to be registered and sending an action instruction to the user to be registered;
the face image obtaining module is specifically configured to obtain a face image of the user to be registered in response to the action instruction;
the living body detection module is specifically configured to determine whether the face image of the user to be registered is matched with the action instruction; and when the face image of the user to be registered is determined to be matched with the action instruction, determining that the face image of the user to be registered passes through the living body detection.
12. The apparatus of claim 10, wherein the biopsy module comprises:
the screen shooting detection unit is used for performing living body and/or screen shooting detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a living body image or not;
and the first determining unit is used for determining that the face image of the user to be registered passes through the living body detection when the face image of the user to be registered is determined to be the living body image.
13. The apparatus of claim 10, wherein the biopsy module comprises:
The image tampering detection unit is used for performing image tampering detection on the face image of the user to be registered so as to determine whether the face image of the user to be registered is a tampered image or not;
and a second determining unit configured to determine that the face image of the user to be registered passes through the living body detection when it is determined that the face image of the user to be registered is not a tampered image.
14. The apparatus according to claim 11, wherein the face image obtaining module is specifically configured to obtain a face video of the user to be registered in response to the action instruction;
the living body detection module includes:
the face image extraction unit is used for sequentially extracting a plurality of frames of face images from the face video according to time sequence, wherein each frame of face image in the plurality of frames of face images comprises key position points;
the matching degree determining unit is used for determining the matching degree of the position change of the key position point in the multi-frame face image according to the time sequence and the action instruction;
and a third determining unit, configured to determine that the face image of the user to be registered passes through the living body detection when the matching degree is greater than a preset matching degree.
15. The apparatus of claim 11, wherein the action instruction sending module comprises:
the color light control unit is used for controlling the flashing screen color of the terminal which is currently logged in by the user to be registered;
the face image obtaining module is specifically configured to obtain a face image that is shot by the user to be registered in response to the action instruction and under the flashing screen color;
the living body detection module includes:
the reflection spectrum analysis unit is used for carrying out reflection spectrum analysis of visible color light on the face image of the user to be registered so as to detect whether the face image of the user to be registered is a living body image or not;
and a fourth determining unit, configured to determine that the face image of the user to be registered is detected by the living body when determining that the face image of the user to be registered is a living body image.
16. The apparatus of claim 9, wherein the apparatus further comprises:
the first response module is used for responding to the prompt information and performing living detection on the face images added to the target face image cluster in a preset time period;
the second response module is used for responding to the prompt information and sending the face images added to the target face image cluster in the preset time period to a preset terminal;
And the third response module is used for responding to the prompt information and outputting alarm information, wherein the alarm information comprises user information corresponding to the target face image cluster so as to prompt the user to be attacked.
17. A server comprising a face image processing device and a database, wherein the database stores a plurality of face image clusters, each face image cluster comprises a plurality of face images of the same user, and the face image processing device is configured to execute the face image processing method according to any one of claims 1-8.
18. A mobile terminal comprising a face image processing device for performing the face image processing method of any one of claims 1-8.
19. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor implements the face image processing method according to any one of claims 1-8.
20. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, which, when executed by a processor, implements a face image processing method according to any one of claims 1-8.
CN202010398944.4A 2020-05-12 2020-05-12 Face image processing method, device, server, terminal, equipment and medium Active CN111723655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010398944.4A CN111723655B (en) 2020-05-12 2020-05-12 Face image processing method, device, server, terminal, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010398944.4A CN111723655B (en) 2020-05-12 2020-05-12 Face image processing method, device, server, terminal, equipment and medium

Publications (2)

Publication Number Publication Date
CN111723655A CN111723655A (en) 2020-09-29
CN111723655B true CN111723655B (en) 2024-03-08

Family

ID=72564390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010398944.4A Active CN111723655B (en) 2020-05-12 2020-05-12 Face image processing method, device, server, terminal, equipment and medium

Country Status (1)

Country Link
CN (1) CN111723655B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723243A (en) * 2021-08-20 2021-11-30 南京华图信息技术有限公司 Thermal infrared image face recognition method for wearing mask and application

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8620315B1 (en) * 2006-09-29 2013-12-31 Yahoo! Inc. Multi-tiered anti-abuse registration for a mobile device user
CN105808988A (en) * 2014-12-31 2016-07-27 阿里巴巴集团控股有限公司 Method and device for identifying exceptional account
CN105488495A (en) * 2016-01-05 2016-04-13 上海川织金融信息服务有限公司 Identity identification method and system based on combination of face characteristics and device fingerprint
CN106339615A (en) * 2016-08-29 2017-01-18 北京红马传媒文化发展有限公司 Abnormal registration behavior recognition method, system and equipment
CN106657007A (en) * 2016-11-18 2017-05-10 北京红马传媒文化发展有限公司 Method for recognizing abnormal batch ticket booking behavior based on DBSCAN model
CN108629260A (en) * 2017-03-17 2018-10-09 北京旷视科技有限公司 Live body verification method and device and storage medium
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device
WO2018192406A1 (en) * 2017-04-20 2018-10-25 腾讯科技(深圳)有限公司 Identity authentication method and apparatus, and storage medium
CN108229120A (en) * 2017-09-07 2018-06-29 北京市商汤科技开发有限公司 Face unlock and its information registering method and device, equipment, program, medium
CN107835154A (en) * 2017-10-09 2018-03-23 武汉斗鱼网络科技有限公司 A kind of batch registration account recognition methods and system
CN108446387A (en) * 2018-03-22 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for updating face registration library
CN108491813A (en) * 2018-03-29 2018-09-04 百度在线网络技术(北京)有限公司 Method and apparatus for fresh information

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Method for detecting abnormal behaviour of users based on selective clustering ensemble; Juan Du et al.; IET Networks; 2018-03-01; full text *
Real-time detection of video replay spoofing attacks in face recognition (in Chinese); Sun Lin; Pan Gang; Journal of Circuits and Systems; 2010-04-15 (02); full text *
Research on the development trend of network security technology based on big data security analysis (in Chinese); Ning Haibin; Network Information Security; 2016-06-30 (No. 290); full text *
Multi-dimensional consumer group analysis and product recommendation system (in Chinese); Liu Liping; Huang Xiaona; Yang Shan; Pan Jiahui; Computer Systems & Applications; 2020-03-15 (03); full text *
Wang Guangqiong. Fundamentals of C++ Programming (in Chinese). Chengdu: University of Electronic Science and Technology of China Press, 2019, p. 338. *
Chen Zhenguo. Research on Trust Models and Their Applications in the Internet of Things Environment (in Chinese). Beijing: Beijing Jiaotong University Press / Tsinghua University Press, 2019, pp. 117-118. *

Also Published As

Publication number Publication date
CN111723655A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN110321790B (en) Method for detecting countermeasure sample and electronic equipment
WO2017181769A1 (en) Facial recognition method, apparatus and system, device, and storage medium
CN109034102A (en) Human face in-vivo detection method, device, equipment and storage medium
CN108830062B (en) Face recognition method, mobile terminal and computer readable storage medium
CN109381165B (en) Skin detection method and mobile terminal
CN109492550B (en) Living body detection method, living body detection device and related system applying living body detection method
CN108875468B (en) Living body detection method, living body detection system, and storage medium
CN109255620B (en) Encryption payment method, mobile terminal and computer readable storage medium
CN108206892B (en) Method and device for protecting privacy of contact person, mobile terminal and storage medium
CN108549802A (en) A kind of unlocking method, device and mobile terminal based on recognition of face
CN109525837B (en) Image generation method and mobile terminal
CN110807405A (en) Detection method of candid camera device and electronic equipment
CN108462826A (en) A kind of method and mobile terminal of auxiliary photo-taking
CN112241657A (en) Fingerprint anti-counterfeiting method and electronic equipment
CN110765924A (en) Living body detection method and device and computer-readable storage medium
CN108174012A (en) A kind of authority control method and mobile terminal
CN109544172A (en) A kind of display methods and terminal device
CN108629280A (en) Face identification method and mobile terminal
CN110516488A (en) A kind of barcode scanning method and mobile terminal
CN111723655B (en) Face image processing method, device, server, terminal, equipment and medium
CN108109188B (en) Image processing method and mobile terminal
CN109918944A (en) A kind of information protecting method, device, mobile terminal and storage medium
CN109639981A (en) A kind of image capturing method and mobile terminal
CN110443752B (en) Image processing method and mobile terminal
CN108960097B (en) Method and device for obtaining face depth information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant