CN111274899B - Face matching method, device, electronic equipment and storage medium - Google Patents

Face matching method, device, electronic equipment and storage medium

Info

Publication number
CN111274899B
CN111274899B (application CN202010042427.3A)
Authority
CN
China
Prior art keywords
face image
matching degree
face
matching
database
Prior art date
Legal status
Active
Application number
CN202010042427.3A
Other languages
Chinese (zh)
Other versions
CN111274899A (en)
Inventor
何吉波
谭北平
谭志鹏
Current Assignee
Tsinghua University
Beijing Mininglamp Software System Co ltd
Original Assignee
Tsinghua University
Beijing Mininglamp Software System Co ltd
Priority date
Filing date
Publication date
Application filed by Tsinghua University and Beijing Mininglamp Software System Co., Ltd.
Priority to CN202010042427.3A
Publication of CN111274899A
Application granted
Publication of CN111274899B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Collating Specific Patterns (AREA)

Abstract

According to the face matching method, apparatus, electronic device, and storage medium provided herein, after the matching degree between the first feature information and the second feature information pre-stored in the first database is calculated, each matching degree is judged against the requirements. First feature information whose matching degree is greater than a first preset value is stored in a second database, and whether that stored information is valid is then judged, so that features with a matching degree above the first preset value undergo a second analysis, which improves the accuracy of face recognition.

Description

Face matching method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a face matching method, a face matching device, an electronic device, and a storage medium.
Background
In the prior art, face recognition is a biometric technology that identifies a person based on facial feature information. Images or video streams containing human faces are collected with a video camera or still camera, the faces in the images or video are automatically detected and tracked, identity features are then extracted from each detected face, and those features are compared against faces of known identity to determine the identity of each face.
At present, the accuracy of face recognition results is not high, and misrecognition or failure to recognize often occurs.
Disclosure of Invention
In view of the foregoing, an object of the present application is to provide a face matching method, apparatus, electronic device, and storage medium that improve the accuracy of face recognition results.
In a first aspect, an embodiment provides a face matching method, including:
extracting features of a face image to be identified to obtain first feature information of the face image to be identified;
calculating the matching degree of the first characteristic information and the second characteristic information of each face image pre-stored in a first database;
judging whether a matching degree greater than a first preset value exists, and if so, storing the first feature information in a second database; if not, judging whether a matching degree greater than a second preset value exists, and if so, judging that the matching is successful; if not, returning to the step of calculating the matching degree between the first feature information and the second feature information of each face image pre-stored in the first database, until the number of matching attempts reaches a preset number or a matching degree greater than the first preset value or the second preset value is found, wherein the second preset value is smaller than the first preset value;
judging whether the first feature information in the second database is valid, and if so, storing the first feature information in the first database.
In an alternative embodiment, feature extraction is performed on a face image to be identified to obtain first feature information of the face image to be identified, including:
extracting features of the face image to be identified through a trained deep convolutional neural network algorithm to obtain a plurality of first feature values of the face image to be identified.
In an alternative embodiment, the method further comprises a step of pre-storing the second characteristic information to the first database, the step comprising:
extracting features of a plurality of face images through a trained deep convolutional neural network algorithm to obtain a plurality of second feature values of each face image, and storing the second feature values of each face image in the first database.
In an optional embodiment, calculating the matching degree between the first feature information and the second feature information of each face image pre-stored in the first database includes:
comparing each first feature value of the face image to be identified with the corresponding second feature value among the plurality of second feature values of each face image pre-stored in the first database, and counting the number of first feature values identical to the corresponding second feature values;
calculating the ratio of that number to the total number of first feature values to obtain the matching degree.
In an alternative embodiment, before extracting the features of the face image to be identified, the method further includes:
acquiring a face image, converting the acquired face image into a grayscale image, and using the grayscale image as the face image to be identified.
In an alternative embodiment, before extracting the features of the face image to be identified, the method further includes:
acquiring a face image, and converting the acquired face image into a grayscale image;
performing noise filtering on the grayscale image;
performing a light compensation operation on the noise-filtered grayscale image, and using the resulting image as the face image to be identified.
In a second aspect, an embodiment provides a face matching apparatus, the apparatus including:
the feature extraction module is used for carrying out feature extraction on the face image to be identified so as to obtain first feature information of the face image to be identified;
the matching degree calculating module is used for calculating the matching degree of the first characteristic information and the second characteristic information of each face image prestored in the first database;
the first judging module is used for judging whether a matching degree greater than a first preset value exists, and if so, storing the first feature information in the second database; if not, judging whether a matching degree greater than a second preset value exists, and if so, judging that the matching is successful; if not, returning to the step of calculating the matching degree between the first feature information and the second feature information of each face image pre-stored in the first database, until the number of matching attempts reaches a preset number or a matching degree greater than the first preset value or the second preset value is found, wherein the second preset value is smaller than the first preset value;
the second judging module is used for judging whether the first feature information in the second database is valid, and if so, storing the first feature information in the first database.
In an alternative embodiment, the feature extraction module is specifically configured to:
extracting features of the face image to be identified through a trained deep convolutional neural network algorithm to obtain a plurality of first feature values of the face image to be identified.
In a third aspect, an embodiment provides an electronic device, including a processor and a nonvolatile memory storing computer instructions that, when executed by the processor, perform the face matching method of any one of the foregoing embodiments.
In a fourth aspect, an embodiment provides a storage medium having stored therein a computer program that when executed implements the face matching method of any one of the foregoing embodiments.
According to the face matching method, apparatus, electronic device, and storage medium provided herein, after the matching degree between the first feature information and the second feature information pre-stored in the first database is calculated, each matching degree is judged against the requirements. First feature information whose matching degree is greater than a first preset value is stored in the second database, and whether that stored information is valid is then judged, so that features with a matching degree above the first preset value undergo a second analysis, which improves the accuracy of face recognition.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is one of flowcharts of a face matching method provided in an embodiment of the present application;
FIG. 3 is a second flowchart of a face matching method according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of substeps of step S220 provided in an embodiment of the present application;
fig. 5 is a functional block diagram of a face matching device provided in an embodiment of the present application.
Description of main reference numerals: 100-an electronic device; 110-face matching means; 120-memory; 130-a processor; 1101-feature extraction module; 1102-a matching degree calculation module; 1103-a first judgment module; 1104-a second determination module.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
First, referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The face matching method provided in the embodiments of the present application is applied to the electronic device 100. The electronic device 100 includes a processor 130, a memory 120, and a face matching device 110; the memory 120 and the processor 130 are electrically connected to each other, directly or indirectly, to enable data transmission and interaction. For example, these components may be electrically connected via one or more communication buses or signal lines. The face matching device 110 includes at least one software function module that may be stored in the memory 120 in the form of software or firmware, or embedded in the operating system (OS) of the electronic device 100. The processor 130 executes the executable modules stored in the memory 120, such as the software function modules and computer programs included in the face matching device 110. The electronic device 100 may be, but is not limited to, a wearable device, a smartphone, a tablet computer, a personal digital assistant, or the like.
The memory 120 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM). The memory 120 stores a program, and the processor 130 executes the program after receiving an execution instruction.
The processor 130 may be an integrated circuit chip with signal processing capabilities. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The face matching method provided by the embodiment of the application is described in detail below. Referring to fig. 2, fig. 2 is a flowchart of a face matching method according to an embodiment of the present application, which is applied to the electronic device 100 in fig. 1, and the method includes the following steps:
step S210, extracting features of the face image to be identified to obtain first feature information of the face image to be identified.
Step S220, calculating the matching degree of the first characteristic information and the second characteristic information of each face image pre-stored in the first database.
Step S230, judging whether a matching degree greater than a first preset value exists; if so, storing the first feature information in a second database; if not, judging whether a matching degree greater than a second preset value exists; if so, judging that the matching is successful; if not, returning to the step of calculating the matching degree between the first feature information and the second feature information of each face image pre-stored in the first database, until the number of matching attempts reaches a preset number or a matching degree greater than the first preset value or the second preset value is found.
Wherein the second preset value is smaller than the first preset value.
Step S240, judging whether the first characteristic information in the second database is valid, and if so, storing the first characteristic information into the first database.
Face recognition is a biometric technology that identifies a person based on facial feature information. A series of related technologies, commonly also called portrait recognition or facial recognition, capture images or video streams containing faces with a camera, automatically detect and track the faces in the images, and recognize the detected faces. The process mainly comprises three stages: face detection, feature extraction, and face matching. A face capture and acquisition unit is responsible for collecting face information, including face photos, scene photos, video, and the like. After the face information is acquired, features are extracted from it and matched against the pre-stored face information.
In the above steps, when recognizing a face image, feature extraction must first be performed on the acquired face image to obtain the first feature information of the face image to be recognized. The first feature information uniquely identifies a face image: because every face has unique characteristics, the first feature information extracted from each face image to be identified is different.
After the first feature information of the face image to be recognized is extracted, the matching degree between it and the second feature information of each face image pre-stored in the first database is calculated, and each calculated matching degree is judged against the first preset value.
If a matching degree is greater than the first preset value, the corresponding first feature information is stored in the second database; if not, it is judged whether any matching degree is greater than the second preset value, and the cycle repeats until the number of matching cycles reaches the preset number or a matching degree greater than the first preset value or the second preset value is found.
For example, in one implementation of the present embodiment, the first preset value may be 95% and the second preset value may be 85%. If 4 face images are stored in the first database, calculating the matching degree of the first characteristic information of the face image to be recognized and the second characteristic information of each face image, and obtaining four matching degrees.
Among the four matching degrees, it is judged whether any is greater than the first preset value (for example, 95%); if so, the first feature information is stored in the second database. If not, it is judged whether any is greater than the second preset value (for example, 85%); if so, the matching is judged successful. If not, the above steps are repeated: the matching degree between the first feature information of the face image to be identified and the second feature information of each face image is recalculated to obtain four matching degrees, which are judged again, until the number of repeated matches reaches the preset number or a matching degree greater than the first preset value or the second preset value is found.
When a matching degree greater than the second preset value exists, it can be determined that the face image corresponding to the first feature information and the pre-stored face image in the first database corresponding to that matching degree (for example, greater than 85%) belong to the same user; that is, the matching is successful, and subsequent operations such as payment, door opening, or clocking in can proceed.
When the number of matches reaches the preset number and no matching degree greater than the second preset value or the first preset value exists, a result indicating failure is output, for example displaying "unable to match" or "matching failed", which avoids entering an infinite loop.
Setting a number of matching attempts enables multiple matches, which avoids matching errors caused by sporadic errors of the face recognition algorithm and improves the accuracy of the face recognition result.
The expressions of the same person captured at different moments are unlikely to be identical, so the matching degree between a face image to be recognized and a pre-stored image of the same person is normally not extremely high. When the matching degree is greater than the first preset value (for example, 95%), face verification may be being attempted with a photograph or video of the person, so it is necessary to judge whether the first feature information in this case comes from the valid face image of a real person. Whether the first feature information in the second database is valid is therefore judged, and if so, the first feature information is stored in the first database. If it is invalid, the matching is judged to have failed, and a prompt such as "unable to match" or "matching failed" is output.
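For concreteness, the overall flow just described can be sketched as follows. This is a minimal illustration, not the patented implementation: the threshold values, the list-based databases, and the is_valid liveness check are all assumed stand-ins.

```python
# A minimal sketch of the matching flow described above. Thresholds, the
# is_valid liveness check, and the list-based databases are illustrative
# assumptions rather than the patented implementation.
from typing import Callable, List

FIRST_PRESET = 0.95   # suspiciously high match -> second analysis
SECOND_PRESET = 0.85  # ordinary successful match
MAX_ATTEMPTS = 3      # preset number of matching attempts

def matching_degree(first: List[float], second: List[float]) -> float:
    """Ratio of identical feature values (the embodiment detailed below)."""
    same = sum(1 for a, b in zip(first, second) if a == b)
    return same / len(first)

def match_face(first_features: List[float],
               first_db: List[List[float]],
               second_db: List[List[float]],
               is_valid: Callable[[List[float]], bool]) -> str:
    for _ in range(MAX_ATTEMPTS):
        # In a real system each attempt would re-extract features from a
        # fresh capture, which is what makes repetition worthwhile.
        degrees = [matching_degree(first_features, s) for s in first_db]
        if any(d > FIRST_PRESET for d in degrees):
            second_db.append(first_features)     # hold for second analysis
            if is_valid(first_features):         # real, live face?
                first_db.append(first_features)  # promote to first database
                return "matched"
            return "failed"                      # possible photo/video attack
        if any(d > SECOND_PRESET for d in degrees):
            return "matched"                     # ordinary success
    return "failed"                              # preset attempts exhausted
```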
Performing this secondary analysis on first feature information whose matching degree exceeds the first preset value improves the accuracy of face recognition.
Optionally, in this embodiment, step S210 performs feature extraction on a face image to be identified to obtain first feature information of the face image to be identified, and specifically includes:
extracting features of the face image to be identified through a trained deep convolutional neural network algorithm to obtain a plurality of first feature values of the face image to be identified.
In this embodiment, in order for the electronic device 100 to be able to distinguish between different face images, feature extraction needs to be performed on each face image, and the electronic device 100 can calculate the matching degree through the first feature information and the second feature information.
Specifically, the face image to be recognized can be encoded through a trained deep convolutional neural network algorithm, so that a plurality of first characteristic values of the face image to be recognized are obtained.
In this step, before the feature extraction of the face image to be identified, the deep convolutional neural network algorithm needs to be trained, so that the algorithm can perform feature extraction on the face image after training.
Before training, a large number of training sets must be prepared, each containing at least three face images: the first and second face images may be face images of the same person in different states, while the third is a face image of a different person. During training, the deep convolutional neural network algorithm generates a plurality of feature values for each of the three face images, and the network parameters of the algorithm are adjusted so that the feature values of the first face image become close to or the same as those of the second face image while differing greatly from those of the third face image.
Training proceeds over the large number of training sets, and the network parameters of the algorithm are adjusted continually until training is complete, that is, until the feature values of the first face image are close to or the same as those of the second face image and differ greatly from those of the third face image, yielding the trained deep convolutional neural network algorithm. After training is complete, the algorithm can extract features from a face image, producing a plurality of first feature values.
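Although the description does not name a specific objective, the training it outlines (pulling same-person feature values together and pushing different-person values apart over anchor/positive/negative triples) matches a standard triplet loss. Below is a sketch under that assumption, using PyTorch, with a small placeholder network standing in for the unspecified deep convolutional neural network.

```python
# A triplet-style training sketch. The placeholder network and the use of
# PyTorch's TripletMarginLoss are assumptions; the patent leaves both the
# architecture and the parameter-adjustment procedure unspecified.
import torch
import torch.nn as nn

embed_net = nn.Sequential(          # placeholder for the deep CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 128),     # 128 feature values per face image
)
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(embed_net.parameters(), lr=1e-4)

def train_step(anchor, positive, negative):
    # anchor/positive: same person in different states; negative: another person
    optimizer.zero_grad()
    loss = criterion(embed_net(anchor), embed_net(positive), embed_net(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```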
Optionally, referring to fig. 3, fig. 3 is a second flowchart of a face matching method according to an embodiment of the present application. In this embodiment, the face matching method further includes:
step S209, pre-storing the second characteristic information in the first database.
Specifically, in the present embodiment, step S209 specifically includes: and extracting the characteristics of the face images through a trained deep convolutional neural network algorithm to obtain a plurality of second characteristic values of each face image, and storing the second characteristic values of each face image into a first database.
When face matching is performed, the currently acquired face image to be identified must be matched against the face images pre-stored in the first database, to judge whether a face image matching the currently acquired one exists. The pre-stored face images are the faces of users with the corresponding authority, entered in advance.
In this embodiment, the specific matching method may be matching the matching degree of the first feature information and the second feature information obtained after feature extraction.
Therefore, when a user is enrolled, not only must the user's face image be entered, but feature extraction must also be performed on that image to obtain a plurality of second feature values. Specifically, feature extraction can be performed on each face image to be pre-stored through the trained deep convolutional neural network, yielding a plurality of second feature values for each face image; the second feature values are stored in the first database in correspondence with their face images so that subsequent matching-degree calculations can be performed.
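A minimal enrollment sketch under the same assumptions follows; embed_net_forward is a hypothetical wrapper around the trained network, and the rounding step is an assumed quantization so that the exact-equality comparison used by the matching degree below is meaningful.

```python
# A minimal enrollment sketch. embed_net_forward is a hypothetical wrapper
# around the trained deep CNN, and the rounding is an assumed quantization so
# that exact-equality comparison of feature values (used below) is meaningful.
import numpy as np

first_database = {}  # user_id -> 128 second feature values

def extract_features(gray_image: np.ndarray) -> np.ndarray:
    raw = embed_net_forward(gray_image)  # hypothetical trained-network call
    return np.round(raw, 2)              # quantize for value-equality matching

def enroll(user_id: str, gray_image: np.ndarray) -> None:
    first_database[user_id] = extract_features(gray_image)
```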
Specifically, referring to fig. 4, fig. 4 is a flowchart illustrating the substeps of step S220 according to the embodiment of the present application. In this embodiment, step S220 calculates a matching degree between the first feature information and the second feature information of each face image pre-stored in the first database, including:
in sub-step S2201, each first feature value of the face image to be recognized is compared with each second feature value of the plurality of second feature values corresponding to each face image pre-stored in the first database, and the number of first feature values identical to the second feature value is calculated.
In a substep S2202, the ratio of the number of first feature values to the plurality of first feature values is calculated to obtain the degree of matching.
In the above substep, after feature extraction is performed on the face image to be identified by the deep convolutional neural network algorithm, a plurality of first feature values, for example, 128 first feature values, may be obtained.
When 4 face images are stored in the first database, each face image has a plurality of second feature values (for example, 128) equal in number to the first feature values. To calculate the matching degree, the 128 first feature values are compared with the 128 second feature values one by one, and the number of identical values is counted; for example, if 115 of the 128 first feature values are identical to the corresponding values among the 128 second feature values, then the number of first feature values identical to second feature values is 115.
The matching degree is then calculated according to the following formula:

α = x / y

where α is the matching degree, x is the number of first feature values identical to the corresponding second feature values, and y is the total number of first feature values.

In the example above, x = 115 and y = 128, so α = 115/128 ≈ 89.8%. The matching degree between the first feature information of the face image to be recognized and each pre-stored face record in the first database can be calculated with this formula.
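The arithmetic of the worked example, written out directly (115/128 rounds to 89.8%):

```python
# The worked example above as direct arithmetic: alpha = x / y.
x, y = 115, 128        # identical values / total first feature values
alpha = x / y          # 0.8984375
print(f"{alpha:.1%}")  # -> 89.8%
```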
In another implementation of this embodiment, the Euclidean distance may also be calculated and used as the matching degree. Specifically, the Euclidean distance formula may be:

d(x, y) = sqrt( (x_1 − y_1)² + (x_2 − y_2)² + … + (x_N − y_N)² )

where x_n is the nth first feature value, y_n is the nth second feature value, and d(x, y) is the Euclidean distance between the first feature values and the second feature values. The smaller the Euclidean distance, the higher the matching degree between the first face features and the second face features; conversely, the larger the Euclidean distance, the lower the matching degree.
Euclidean distances of different sizes can be regarded as matching degrees of different sizes; for example, a Euclidean distance of 0.05 may be regarded as a matching degree of 95%, and a Euclidean distance of 0.85 as a matching degree of 15%.
It should be understood that the above is merely one illustrative correspondence between Euclidean distance and matching degree; other implementations of this embodiment may use other correspondences.
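Both example points above happen to lie on the line degree = 1 − d, so a sketch might adopt that linear mapping; this is an assumed correspondence, not one the text prescribes.

```python
# A sketch of the Euclidean-distance variant. The linear mapping
# degree = 1 - d is an assumed correspondence that fits both example
# points in the text (0.05 -> 95%, 0.85 -> 15%); it is not prescribed.
import numpy as np

def euclidean_matching_degree(first: np.ndarray, second: np.ndarray) -> float:
    d = float(np.linalg.norm(first - second))  # d(x, y)
    return max(0.0, 1.0 - d)
```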
Optionally, in this embodiment, before step S210, the face matching method further includes:
and acquiring a face image, converting the acquired face image into a gray image, and taking the gray image as the face image to be identified.
When acquiring the face image, the user's face image can be captured by the camera and converted into a grayscale image, and the grayscale image is used as the face image to be identified for the feature extraction operation. This reduces the data size of the image and the size of the image as a whole, which speeds up feature extraction and shortens the time it requires.
Optionally, in this embodiment, before step S210, the face matching method further includes:
acquiring a face image, and converting the acquired face image into a gray level image; noise filtering is carried out on the gray level image; and performing light compensation operation on the gray level image after noise filtering, and taking the image after the light compensation operation as a face image to be identified.
The image is disturbed by random signals during acquisition or transmission, so that random, discrete, isolated pixels appear on the image, which interfere with subsequent feature extraction operations. Therefore, in this step, after the obtained face image is converted into the gray image, noise filtering may be performed on the gray image, and the interference pixel points in the image may be filtered through noise filtering, so that the subsequent feature extraction result is more accurate.
When the image is captured, the lighting of the acquired image containing the face information may be uneven because of the capture angle: some regions of the image may be darker and others brighter, which affects subsequent feature extraction. Therefore, in this step, after noise filtering is performed on the grayscale image, a light compensation operation may be performed. Light compensation offsets the effect of overly bright or overly dark regions on the image, making the subsequent feature extraction more accurate and thereby improving the final recognition accuracy of face recognition.
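A preprocessing sketch covering both embodiments might look as follows, assuming OpenCV; median filtering and histogram equalization are assumed choices for the noise-filtering and light-compensation operations, which the text leaves unspecified.

```python
# A preprocessing sketch for the two embodiments above, assuming OpenCV.
# Median filtering and histogram equalization are assumed choices for the
# unspecified noise-filtering and light-compensation operations.
import cv2
import numpy as np

def preprocess(bgr_image: np.ndarray, full: bool = True) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    if not full:
        return gray                     # first embodiment: grayscale only
    denoised = cv2.medianBlur(gray, 3)  # remove isolated noisy pixels
    return cv2.equalizeHist(denoised)   # compensate uneven lighting
```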
In summary, performing a second analysis on first feature information whose matching degree is greater than the first preset value improves the accuracy of face recognition.
Meanwhile, before feature extraction, the face image can be preprocessed through operations such as noise filtering and light compensation, so that the face is displayed more clearly, which facilitates feature extraction.
Setting a number of matching attempts enables multiple matches, which avoids matching errors caused by sporadic errors of the face recognition algorithm and improves the accuracy of the face recognition result.
Referring to fig. 5, fig. 5 is a functional block diagram of a face matching device 110 according to an embodiment of the present application. The face matching device 110 is applied to the electronic device 100 in fig. 1, and the device includes the following modules:
the feature extraction module 1101 is configured to perform feature extraction on a face image to be identified, so as to obtain first feature information of the face image to be identified.
And the matching degree calculating module 1102 is configured to calculate a matching degree between the first feature information and second feature information of each face image pre-stored in the first database.
A first determining module 1103, configured to determine, for each calculated matching degree, whether the matching degree is greater than a first preset value, and if so, store the first feature information in a second database; if not, determine whether any matching degree is greater than a second preset value, and if so, determine that the matching is successful; if not, return to the step of calculating the matching degree between the first feature information and the second feature information of each face image pre-stored in the first database, until the number of matching attempts reaches a preset number or a matching degree greater than the first preset value or the second preset value is found, wherein the second preset value is smaller than the first preset value.
A second determining module 1104, configured to determine whether the first feature information in the second database is valid, and if so, store the first feature information in the first database.
Optionally, in this embodiment, the feature extraction module 1101 is specifically configured to:
extracting features of the face image to be identified through a trained deep convolutional neural network algorithm to obtain a plurality of first feature values of the face image to be identified.
Optionally, in this embodiment, the matching degree calculating module 1102 is specifically configured to:
comparing each first feature value of the face image to be identified with the corresponding second feature value among the plurality of second feature values of each face image pre-stored in the first database, counting the number of first feature values identical to the corresponding second feature values, and calculating the ratio of that number to the total number of first feature values to obtain the matching degree.
The face matching device 110 provided in the embodiments of the present application may be specific hardware on the electronic device 100 or software or firmware installed on it. The device provided in the embodiments of the present application has the same implementation principle and technical effects as the foregoing method embodiments; for matters not mentioned in this device embodiment, reference may be made to the corresponding content in the foregoing method embodiments. It will be clear to those skilled in the art that, for convenience and brevity, the specific operation of the system, apparatus, and units described above may follow the corresponding processes in the above method embodiment, which are not repeated here.
The embodiment of the present application further provides an electronic device 100, including a processor 130 and a nonvolatile memory 120 storing computer instructions, where when the computer instructions are executed by the processor 130, the electronic device 100 executes the face matching method described above, and specific implementation steps may refer to corresponding processes in the foregoing method embodiments, which are not described herein again.
The embodiment of the application further provides a storage medium, in which a computer program is stored, the computer program is executed to perform the face matching method, and specific implementation steps may refer to corresponding processes in the foregoing method embodiment, which is not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that: like reference numerals and letters in the following figures denote like items, and thus once an item is defined in one figure, no further definition or explanation of it is required in the following figures, and furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited to them. Although the foregoing embodiments are described in detail, any person skilled in the art may, within the technical scope disclosed in the present application, modify or readily conceive of variations of the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the corresponding technical solutions and are intended to be encompassed within the protection scope of this application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. A face matching method, comprising:
extracting features of a face image to be identified to obtain first feature information of the face image to be identified, wherein the first feature information obtained after feature extraction differs for each face image to be identified, and wherein extracting features of the face image to be identified to obtain the first feature information comprises:
extracting features of the face image to be recognized through a trained deep convolutional neural network algorithm to obtain a plurality of first feature values of the face image to be recognized;
calculating the matching degree of the first characteristic information and the second characteristic information of each face image pre-stored in a first database;
judging whether a matching degree greater than a first preset value exists, and if so, storing the first feature information in a second database; if not, judging whether a matching degree greater than a second preset value exists, and if so, judging that the matching is successful; if not, returning to the step of calculating the matching degree between the first feature information and the second feature information of each face image pre-stored in the first database, until the number of matching attempts reaches a preset number or a matching degree greater than the first preset value or the second preset value is found, wherein the second preset value is smaller than the first preset value;
judging whether the first feature information in the second database is valid; if so, storing the first feature information in the first database, and if not, judging that the matching has failed; the method further comprising a step of pre-storing the second feature information in the first database, the step comprising: extracting features of a plurality of face images through a trained deep convolutional neural network algorithm to obtain a plurality of second feature values of each face image, and storing the second feature values of each face image in the first database; wherein the deep convolutional neural network algorithm is trained on a training set by adjusting its network parameters, the training set comprising at least three face images, of which the first and second face images are face images of the same person in different states and the third face image is a face image of a different person; and wherein the network parameters are adjusted so that the feature values of the first face image are the same as those of the second face image and different from those of the third face image.
2. The method according to claim 1, wherein calculating the matching degree of the first feature information with the second feature information of each face image pre-stored in the first database includes:
comparing each first feature value of the face image to be identified with the corresponding second feature value among the plurality of second feature values of each face image pre-stored in the first database, and counting the number of first feature values identical to the corresponding second feature values;
calculating the ratio of that number to the total number of first feature values to obtain the matching degree.
3. The method according to claim 1, wherein before feature extraction is performed on the face image to be identified, the method further comprises:
and acquiring a face image, converting the acquired face image into a gray image, and taking the gray image as the face image to be identified.
4. The method according to claim 1, wherein before feature extraction is performed on the face image to be identified, the method further comprises:
acquiring a face image, and converting the acquired face image into a gray level image;
noise filtering is carried out on the gray level image;
and performing light compensation operation on the gray level image after noise filtering, and taking the image after the light compensation operation as a face image to be identified.
5. A face matching device, the device comprising:
the feature extraction module is used for performing feature extraction on the face image to be identified to obtain first feature information of the face image to be identified, wherein the first feature information obtained after feature extraction differs for each face image to be identified;
the feature extraction module is specifically configured to:
extracting features of a face image to be recognized through a trained deep convolutional neural network algorithm to obtain a plurality of first feature values of the face image to be recognized;
the matching degree calculating module is used for calculating the matching degree of the first characteristic information and the second characteristic information of each face image prestored in the first database;
the first judging module is used for judging whether a matching degree greater than a first preset value exists, and if so, storing the first feature information in the second database; if not, judging whether a matching degree greater than a second preset value exists, and if so, judging that the matching is successful; if not, returning to the step of calculating the matching degree between the first feature information and the second feature information of each face image pre-stored in the first database, until the number of matching attempts reaches a preset number or a matching degree greater than the first preset value or the second preset value is found, wherein the second preset value is smaller than the first preset value;
the second judging module is used for judging whether the first characteristic information in the second database is valid or not, if so, the first characteristic information is stored in the first database, and if not, the matching is judged to be failed;
the feature extraction module is further used for pre-storing second feature information into a first database;
the feature extraction module is specifically configured to perform feature extraction on a plurality of face images through a trained deep convolutional neural network algorithm to obtain a plurality of second feature values of each face image, and to store the second feature values of each face image in the first database; wherein the deep convolutional neural network algorithm is trained on a training set by adjusting its network parameters, the training set comprising at least three face images, of which the first and second face images are face images of the same person in different states and the third face image is a face image of a different person; and wherein the network parameters are adjusted so that the feature values of the first face image are the same as those of the second face image and different from those of the third face image.
6. An electronic device comprising a processor and a non-volatile memory storing computer instructions that, when executed by the processor, perform the face matching method of any one of claims 1-4.
7. A storage medium having stored therein a computer program which when executed implements the face matching method of any one of claims 1-4.
CN202010042427.3A 2020-01-15 2020-01-15 Face matching method, device, electronic equipment and storage medium Active CN111274899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010042427.3A CN111274899B (en) 2020-01-15 2020-01-15 Face matching method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010042427.3A CN111274899B (en) 2020-01-15 2020-01-15 Face matching method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111274899A CN111274899A (en) 2020-06-12
CN111274899B true CN111274899B (en) 2024-03-26

Family

ID=71001060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010042427.3A Active CN111274899B (en) 2020-01-15 2020-01-15 Face matching method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111274899B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115331346B (en) * 2022-08-30 2024-02-13 深圳市巨龙创视科技有限公司 Campus access control management method and device, electronic equipment and storage medium


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577789A (en) * 2012-07-26 2014-02-12 中兴通讯股份有限公司 Detection method and device
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN106845345A (en) * 2016-12-15 2017-06-13 重庆凯泽科技股份有限公司 Biopsy method and device
CN108345780A (en) * 2018-02-11 2018-07-31 维沃移动通信有限公司 A kind of solution lock control method and mobile terminal
CN109002804A (en) * 2018-07-25 2018-12-14 浙江威步机器人技术有限公司 Face identification method, device, storage medium and electronic equipment
CN108875713A (en) * 2018-08-02 2018-11-23 台州市金算子知识产权服务有限公司 Face identification method, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN111274899A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN109086691B (en) Three-dimensional face living body detection method, face authentication and identification method and device
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
CN110443110B (en) Face recognition method, device, terminal and storage medium based on multipath camera shooting
CN109670383B (en) Video shielding area selection method and device, electronic equipment and system
Debiasi et al. PRNU variance analysis for morphed face image detection
WO2020244071A1 (en) Neural network-based gesture recognition method and apparatus, storage medium, and device
CN110390229B (en) Face picture screening method and device, electronic equipment and storage medium
CN110532746B (en) Face checking method, device, server and readable storage medium
CN111931548B (en) Face recognition system, method for establishing face recognition data and face recognition method
CN111814776B (en) Image processing method, device, server and storage medium
CN111639653A (en) False detection image determining method, device, equipment and medium
WO2017131870A1 (en) Decoy-based matching system for facial recognition
US11164327B2 (en) Estimation of human orientation in images using depth information from a depth camera
CN111274899B (en) Face matching method, device, electronic equipment and storage medium
CN113837006B (en) Face recognition method and device, storage medium and electronic equipment
CN113158773B (en) Training method and training device for living body detection model
Dimitrov et al. Creation of Biometric System of Identification by Facial Image
CN110688926B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN108805883B (en) Image segmentation method, image segmentation device and electronic equipment
CN113239738B (en) Image blurring detection method and blurring detection device
CN111222485A (en) 3D face recognition method and device, electronic equipment and storage medium
EP4105825A1 (en) Generalised anomaly detection
CN114898475A (en) Underground personnel identity identification method and device, electronic equipment and readable storage medium
CN114663930A (en) Living body detection method and device, terminal equipment and storage medium
CN112183454A (en) Image detection method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant