CN109145737B - Rapid face recognition method and device, electronic equipment and storage medium - Google Patents

Rapid face recognition method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN109145737B
CN109145737B (application CN201810788722.6A)
Authority
CN
China
Prior art keywords
sample
face image
target
input face
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810788722.6A
Other languages
Chinese (zh)
Other versions
CN109145737A (en)
Inventor
李中伟
白金川
赵宗亚
朱永涛
任武
蒋文帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Plain Public Intellectual Property Operation And Management Co ltd
Original Assignee
Xinxiang Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinxiang Medical University filed Critical Xinxiang Medical University
Priority to CN201810788722.6A priority Critical patent/CN109145737B/en
Publication of CN109145737A publication Critical patent/CN109145737A/en
Application granted granted Critical
Publication of CN109145737B publication Critical patent/CN109145737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a rapid face recognition method comprising the following steps: acquiring position information of a sample set and of an input face image; obtaining descriptor information of the sample set and of the input face image by using the gradient-direction distribution characteristics of the neighborhood pixels of the key points; calculating the energy values of the key points of the sample set and of the input face image; letting all key points of the input face image respectively traverse all key points of the sample set, and recording all matching key points obtained as target key points; determining target samples according to the target key points; and finding the best matching sample for the input face image from the target samples according to the Euclidean distance. The invention also discloses a rapid face recognition device, an electronic device and a computer-readable storage medium. The invention can reduce the time complexity of the face matching process, shorten the time spent on face recognition, and thus achieve rapid face recognition.

Description

Rapid face recognition method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of face recognition technologies, and in particular, to a method and an apparatus for fast face recognition, an electronic device, and a storage medium.
Background
In scenes such as video surveillance, face recognition access control and face recognition unlocking, face recognition plays an important role: it enables automatic visitor identification, judgment of intrusion or unlocking by strangers, and automatic alarms.
Existing face recognition systems are generally built on the Scale-Invariant Feature Transform (SIFT) algorithm. SIFT has strong matching capability under translation, rotation, affine transformation and the like of an object, and on that basis many researchers have tried to apply it to face recognition. However, the human face is elastic: facial expression changes, uneven illumination and the like easily make the feature points unstable, so matching of the feature points becomes difficult. Moreover, the superior matching performance of the SIFT algorithm in object recognition depends on the multi-dimensional information of the key points; for example, a face with 68 key points carries 128-dimensional information for each key point. When the number of key points grows greatly and the database samples reach a certain order of magnitude, the time complexity of the algorithm increases sharply. With n samples stored in a database, if matching is performed by computing the Euclidean distances of the feature points one by one and the results are then ordered by quicksort, the time complexity is O(n² log n); face recognition has high requirements on time performance, which this obviously cannot meet.
Disclosure of Invention
In order to overcome the defects of the prior art, an object of the present invention is to provide a fast face recognition method, which can reduce the time complexity in the face matching process, shorten the time spent on face recognition, and achieve the purpose of fast face recognition.
The second objective of the present invention is to provide a fast face recognition device, which can reduce the time complexity in the face matching process, shorten the time spent on face recognition, and achieve the purpose of fast face recognition.
The invention also provides an electronic device for implementing the rapid face recognition method.
It is a fourth object of the present invention to provide a computer-readable storage medium storing a computer program that implements the above fast face recognition method.
One of the purposes of the invention is realized by adopting the following technical scheme:
a quick face recognition method comprises the following steps:
positioning key points by using an active appearance model to obtain a sample set and position information of an input face image;
the position information of the sample set is:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
the position information of the input face image is:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
wherein X_i is the position information of the i-th sample, 1 ≤ i ≤ n, and n is the number of samples in the sample set; Y is the position information of the input face image; x_ij and y_ij together form the position information of the j-th key point of the i-th sample, 1 ≤ j ≤ 68; x_j and y_j together form the position information of the j-th key point of the input face image;
obtaining a 128-dimensional feature vector of each key point by using the gradient direction distribution characteristics of the neighborhood pixels of the key points so as to obtain a sample set and descriptor information of an input face image;
P_i = [p_i1, p_i2, ..., p_i68]^T
Q = [q_1, q_2, ..., q_68]^T
wherein P_i is the descriptor information of the i-th sample and p_ij is the feature vector of the j-th key point of the i-th sample; Q is the descriptor information of the input face image and q_j is the feature vector of the j-th key point of the input face image;
p_ij = [p_ij1, p_ij2, ..., p_ij128]^T
q_j = [q_j1, q_j2, ..., q_j128]^T
wherein p_ijk is the k-th element of feature vector p_ij, q_jk is the k-th element of feature vector q_j, and 1 ≤ k ≤ 128;
calculating the energy values of key points of the sample set and the input face image:
E_ij = Σ_{k=1}^{128} p_ijk
E_j = Σ_{k=1}^{128} q_jk
wherein E_ij is the energy value of the j-th key point of the i-th sample, and E_j is the energy value of the j-th key point of the input face image;
for the j-th key point of the input face image, traversing all key points of the sample set and searching for key points of the sample set whose energy value lies in [E_j - w, E_j + w]; the key points of the sample set whose energy value lies in that range are recorded as matching key points of the j-th key point of the input face image; all key points of the input face image respectively traverse all key points of the sample set, and all matching key points obtained are recorded as target key points;
determining a target sample according to the target key point, wherein the target sample is part or all of the samples in the sample set, and descriptor information of the target sample is recorded as P';
finding out the best matching sample with the input face image from the target sample according to the Euclidean distance:
D = min_{1 ≤ a ≤ m} √(Σ_{j=1}^{68} W_j · Σ_{k=1}^{128} (p'_ajk - q_jk)²)
wherein D is the minimum Euclidean distance value between the target samples and the input face image; W_j is the weight value of the j-th key point, a set value, the j-th key point of the sample set and that of the input face image having the same weight; p'_ajk is the k-th element of the feature vector of the j-th key point of the a-th target sample, 1 ≤ a ≤ m, where m is the number of target samples and 1 ≤ m ≤ n; the target sample corresponding to the minimum Euclidean distance value D is the best matching sample.
Preferably, the determining of the target samples according to the target key points comprises: taking all samples of the sample set that correspond to target key points as target samples.
Preferably, the determining of the target samples according to the target key points comprises: keeping only samples all of whose key points lie among the target key points.
Preferably, the determining of the target samples according to the target key points comprises: keeping samples whose number of key points among the target key points reaches a preset number.
Preferably, the method further comprises: comparing the minimum Euclidean distance value D with a preset threshold; if D is greater than the preset threshold, the matching is unsuccessful and no best matching sample for the input face image exists in the sample set; if D is not greater than the preset threshold, the target sample corresponding to D is the best matching sample.
The second purpose of the invention is realized by adopting the following technical scheme:
a fast face recognition apparatus comprising:
the key point positioning module is used for positioning key points by utilizing the active appearance model to acquire the sample set and the position information of the input face image;
the position information of the sample set is:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
the position information of the input face image is:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
wherein X_i is the position information of the i-th sample, 1 ≤ i ≤ n, and n is the number of samples in the sample set; Y is the position information of the input face image; x_ij and y_ij together form the position information of the j-th key point of the i-th sample, 1 ≤ j ≤ 68; x_j and y_j together form the position information of the j-th key point of the input face image;
the descriptor information acquisition module is used for acquiring a 128-dimensional feature vector of each key point by utilizing the gradient direction distribution characteristics of the neighborhood pixels of the key points so as to obtain a sample set and descriptor information of the input face image;
P_i = [p_i1, p_i2, ..., p_i68]^T
Q = [q_1, q_2, ..., q_68]^T
wherein P_i is the descriptor information of the i-th sample and p_ij is the feature vector of the j-th key point of the i-th sample; Q is the descriptor information of the input face image and q_j is the feature vector of the j-th key point of the input face image;
p_ij = [p_ij1, p_ij2, ..., p_ij128]^T
q_j = [q_j1, q_j2, ..., q_j128]^T
wherein p_ijk is the k-th element of feature vector p_ij, q_jk is the k-th element of feature vector q_j, and 1 ≤ k ≤ 128;
the energy calculation module is used for calculating the energy values of key points of the sample set and the input face image:
E_ij = Σ_{k=1}^{128} p_ijk
E_j = Σ_{k=1}^{128} q_jk
wherein E_ij is the energy value of the j-th key point of the i-th sample, and E_j is the energy value of the j-th key point of the input face image;
the traversing module is used for, for the j-th key point of the input face image, traversing all key points of the sample set and searching for key points of the sample set whose energy value lies in [E_j - w, E_j + w]; these key points are recorded as matching key points of the j-th key point of the input face image; all key points of the input face image respectively traverse all key points of the sample set, and all matching key points obtained are recorded as target key points;
the target sample determining module is used for determining a target sample according to the target key point, wherein the target sample is part or all of the samples in the sample set, and the descriptor information of the target sample is marked as P';
the matching module is used for finding out the best matching sample with the input face image from the target samples according to the Euclidean distance:
D = min_{1 ≤ a ≤ m} √(Σ_{j=1}^{68} W_j · Σ_{k=1}^{128} (p'_ajk - q_jk)²)
wherein D is the minimum Euclidean distance value between the target samples and the input face image; W_j is the weight value of the j-th key point, a set value, the j-th key point of the sample set and that of the input face image having the same weight; p'_ajk is the k-th element of the feature vector of the j-th key point of the a-th target sample, 1 ≤ a ≤ m, where m is the number of target samples and 1 ≤ m ≤ n; the target sample corresponding to the minimum Euclidean distance value D is the best matching sample.
The third purpose of the invention is realized by adopting the following technical scheme:
an electronic device, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the fast face recognition method which is one of the objects of the present invention.
The fourth purpose of the invention is realized by adopting the following technical scheme:
a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements a fast face recognition method which is one of the objects of the present invention.
Compared with the prior art, the invention has the beneficial effects that:
the method comprises the steps of positioning key points through an Active Appearance Model (AAM), then obtaining descriptor information of a sample set and an input face image through an SIFT algorithm, screening through an energy mode, constructing a target sample (formed by a plurality of samples in the sample set), obtaining an optimal matching sample through the Euclidean distance between the input face image and the target sample, and completing a face matching process.
Drawings
Fig. 1 is a flowchart of a fast face recognition method according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of a fast face recognition apparatus according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Example one
Referring to fig. 1, a fast face recognition method includes the following steps:
110. positioning key points by using an active appearance model to obtain a sample set and position information of an input face image;
the position information of the sample set is:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
the position information of the input face image is:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
wherein X_i is the position information of the i-th sample, 1 ≤ i ≤ n, and n is the number of samples in the sample set; Y is the position information of the input face image; x_ij and y_ij together form the position information of the j-th key point of the i-th sample, 1 ≤ j ≤ 68; x_j and y_j together form the position information of the j-th key point of the input face image.
The method specifically comprises the following steps:
Aligning sample pictures: select a suitable face image training sample set and mark the feature points (including eyebrows, eyes, nose, mouth, chin and face contour). For each sample in the sample set the face shape is represented by 68 feature points, called key points; the position information of each key point is two-dimensional, so a sample can be represented as:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
To align two samples, the control parameters to be adjusted are the offset T = [T_x, T_y]^T, the scale α, and the rotation angle θ. After these three parameters are adjusted, the Euclidean distance between the two samples is minimized. For the rotation, the two sample shapes x_1 and x_2 are written in 68 × 2 form, then a singular value decomposition of x_1^T x_2 is performed to compute the optimal rotation matrix VU^T, which expresses the rotation by angle θ that aligns x_1 to x_2:
R(θ) = [cos θ, -sin θ; sin θ, cos θ]
The alignment training algorithm is as follows: for each training sample, eliminate the effect of position offset. Arbitrarily choose one sample as the estimate of the sample mean (e.g., the first sample in the sample set). Then align the scale and rotation orientation of all samples in the sample set to this sample mean, and recompute the sample mean. If the process has not converged, align all samples to the new sample mean again, repeating until convergence.
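The rotation-alignment step above can be illustrated in code. In 2D the SVD solution has a closed form: the optimal angle is an arctangent of cross- and dot-product sums over the key points. This is a minimal sketch under that equivalence (function names are illustrative, not from the patent), assuming the shapes are already centered and scale-normalized:

```python
import math

def align_rotation(shape_a, shape_b):
    """Least-squares rotation angle taking shape_a onto shape_b.
    Shapes are lists of (x, y) key points, assumed centered and
    scale-normalized; this is the 2D closed form of the SVD solution."""
    num = sum(xa * yb - ya * xb for (xa, ya), (xb, yb) in zip(shape_a, shape_b))
    den = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(shape_a, shape_b))
    return math.atan2(num, den)

def rotate(shape, theta):
    """Rotate every key point of a shape by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in shape]
```

Rotating a shape by a known angle and recovering that angle with `align_rotation` round-trips exactly up to floating-point error.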
Creating a model: all samples are aligned by the alignment operation and the AAM model is then created. The creation of the AAM model includes three stages, shape modeling, texture modeling, and hybrid modeling. The objective is to find suitable model parameters to make the model instance and the sample instance as consistent as possible.
Fitting calculation: the current AAM model instance is generated by fitting, i.e., by calculating the changes of the model parameter set that control the position changes of the shape control points. When the input image matches the current face model instance so that the energy function is minimized, the positions of the shape control points of the current face model instance actually describe the positions of the feature points of the input face image, which amounts to locating the facial feature points of the input face image. Finally the position information of the input face image is obtained as:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
120. Using the SIFT descriptor, the distribution information of each key point is described by the gradient directions of its neighborhood pixels, and a 128-dimensional feature vector of each key point is obtained, giving the descriptor information of the sample set and of the input face image:
P_i = [p_i1, p_i2, ..., p_i68]^T
Q = [q_1, q_2, ..., q_68]^T
wherein P_i is the descriptor information of the i-th sample and p_ij is the feature vector of the j-th key point of the i-th sample; Q is the descriptor information of the input face image and q_j is the feature vector of the j-th key point of the input face image;
p_ij = [p_ij1, p_ij2, ..., p_ij128]^T
q_j = [q_j1, q_j2, ..., q_j128]^T
wherein p_ijk is the k-th element of feature vector p_ij, q_jk is the k-th element of feature vector q_j, and 1 ≤ k ≤ 128;
130. Calculating the energy values of the key points of the sample set and of the input face image:
E_ij = Σ_{k=1}^{128} p_ijk
E_j = Σ_{k=1}^{128} q_jk
wherein E_ij is the energy value of the j-th key point of the i-th sample, and E_j is the energy value of the j-th key point of the input face image.
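Assuming the energy of a key point is the sum of its 128 descriptor elements (an assumption: a sum of squares would be constant for unit-normalized SIFT descriptors, so the plain sum is the natural reading, but the patent's exact formula may differ), the energy computation can be sketched as:

```python
def keypoint_energy(descriptor):
    """Energy of one key point: assumed here to be the sum of the
    elements of its 128-dimensional descriptor."""
    return sum(descriptor)

def image_energies(descriptors):
    """Energy value of every key point of one face (sample or input).
    descriptors: 68 lists of 128 numbers each."""
    return [keypoint_energy(d) for d in descriptors]
```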
140. For the j-th key point of the input face image, traverse all key points of the sample set and search for key points of the sample set whose energy value lies in [E_j - w, E_j + w] (wherein w is a set deviation value); these key points of the sample set are recorded as matching key points of the j-th key point of the input face image. All key points of the input face image respectively traverse all key points of the sample set, and all matching key points obtained are recorded as target key points.
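This window search is what makes a logarithmic per-key-point lookup possible: if all sample key points are pre-sorted by energy, the [E_j - w, E_j + w] query becomes two binary searches. A sketch (the index structure is illustrative, not taken from the patent):

```python
import bisect

def build_energy_index(sample_energies):
    """sample_energies: list over samples, each a list of key point
    energies. Returns (keys, entries): entries is a list of
    (energy, sample_index, keypoint_index) sorted by energy, and
    keys holds just the energies for binary search."""
    entries = sorted(
        (e, i, j)
        for i, energies in enumerate(sample_energies)
        for j, e in enumerate(energies)
    )
    keys = [e for e, _, _ in entries]
    return keys, entries

def window_query(keys, entries, e_j, w):
    """All (energy, sample, keypoint) triples with energy in [e_j - w, e_j + w],
    found with two O(log n) binary searches."""
    lo = bisect.bisect_left(keys, e_j - w)
    hi = bisect.bisect_right(keys, e_j + w)
    return entries[lo:hi]
```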
150. Determine the target samples according to the target key points; the target samples are some or all of the samples in the sample set, and their descriptor information is denoted P'.
The target sample may be screened in any one of the following three ways:
1. All samples of the sample set corresponding to target key points are taken as target samples. The target samples obtained this way are the most comprehensive, so the best matching sample cannot be missed, but the time spent on subsequent calculation is somewhat increased.
2. Only samples all of whose key points lie among the target key points are kept. That is, the target key points are grouped according to the sample they belong to, forming several target key point sets; if the size of one or more of these sets is 68, the corresponding samples are the target samples. The number of target samples obtained this way is the smallest, possibly even zero; if such samples exist, they are very likely the final best matching sample.
Thus, in this mode, one can proceed as follows: search for target samples whose target key point set contains all 68 key points and obtain their number. If there is exactly one such sample, it is directly the best matching sample and the subsequent Euclidean distance calculation is unnecessary; if there is more than one, the Euclidean distance calculation is performed among them, which still greatly reduces the time complexity of the calculation. If no target sample can be obtained this way, the target samples are determined by mode 1 or mode 3.
3. Samples whose number of key points among the target key points reaches a preset number are kept. Similarly to mode 2, the target key points are grouped into target key point sets according to the sample they belong to; if the size of one or more of these sets reaches a preset number, e.g., 40 or 45, the corresponding samples are the target samples. In this mode the preset number can be set according to the time requirement: the larger the preset number, the lower the time complexity, but the best matching sample may be screened out; conversely, the smaller the preset number, the higher the time complexity, but the best matching sample is less easily screened out. With preset numbers of 1 and 68, the screening reduces to mode 1 and mode 2, respectively.
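The three screening modes differ only in a threshold on how many of a sample's key points appear among the target key points, so they can share one sketch (names illustrative): preset=1 gives mode 1, preset=68 gives mode 2, and anything in between gives mode 3.

```python
from collections import Counter

def screen_target_samples(target_keypoints, preset):
    """target_keypoints: iterable of (sample_index, keypoint_index)
    pairs, one per matching key point found in step 140.
    Keeps samples with at least `preset` distinct matched key points:
    preset=1 -> mode 1, preset=68 -> mode 2, otherwise mode 3."""
    counts = Counter(i for i, _ in set(target_keypoints))
    return sorted(i for i, c in counts.items() if c >= preset)
```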
160. Finding the best matching sample for the input face image from the target samples according to the Euclidean distance:
D = min_{1 ≤ a ≤ m} √(Σ_{j=1}^{68} W_j · Σ_{k=1}^{128} (p'_ajk - q_jk)²)
wherein D is the minimum Euclidean distance value between the target samples and the input face image; W_j is the weight value of the j-th key point, a set value, the j-th key point of the sample set and that of the input face image having the same weight; p'_ajk is the k-th element of the feature vector of the j-th key point of the a-th target sample, 1 ≤ a ≤ m, where m is the number of target samples and 1 ≤ m ≤ n. The minimum Euclidean distance value D is compared with a preset threshold: if D is greater than the threshold, matching is unsuccessful and no best matching sample for the input face image exists in the sample set; if D is not greater than the threshold, matching succeeds and the target sample corresponding to D is the best matching sample, completing face recognition. After a successful match, the next operation can be carried out, such as target tracking, opening an entrance guard, or unlocking a mobile phone.
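The weighted-distance step and the threshold check can be sketched together. How exactly the weight W_j enters the distance is an assumption here (each weight multiplies its key point's squared distance inside the square root), and the function names are illustrative:

```python
import math

def best_match(targets, query, weights, threshold=None):
    """targets: dict sample_id -> 68 descriptors of 128 elements each;
    query: descriptors of the input face image, same shape;
    weights: per-key-point weights W_j (assumed placement: each
    weight scales its key point's squared distance).
    Returns (best_sample_id, D); best_sample_id is None when a
    threshold is given and the minimum distance D exceeds it."""
    best_id, best_d = None, math.inf
    for sid, desc in targets.items():
        d = math.sqrt(sum(
            w * sum((p - q) ** 2 for p, q in zip(pj, qj))
            for w, pj, qj in zip(weights, desc, query)))
        if d < best_d:
            best_id, best_d = sid, d
    if threshold is not None and best_d > threshold:
        return None, best_d
    return best_id, best_d
```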
For a database of n samples, looking up any key point energy value takes log(n) time; computing the matching distances for each key point of the m target samples takes m × 68 × 128 operations; and quicksorting the m matching distances to find the minimum Euclidean distance value takes m × log(m). Obviously, the last two terms are constants with respect to n, so the total time complexity is O(log(n)). That is to say, the recognition speed of this face recognition method is much faster than that of the existing approach.
Example two
Referring to fig. 2, the present invention further provides a fast face recognition apparatus, which is a virtual apparatus of the fast face recognition method according to the foregoing embodiment, and the fast face recognition apparatus includes:
a key point positioning module 210, configured to perform key point positioning using the active appearance model to obtain a sample set and position information of an input face image;
the position information of the sample set is:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
the position information of the input face image is:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
wherein X_i is the position information of the i-th sample, 1 ≤ i ≤ n, and n is the number of samples in the sample set; Y is the position information of the input face image; x_ij and y_ij together form the position information of the j-th key point of the i-th sample, 1 ≤ j ≤ 68; x_j and y_j together form the position information of the j-th key point of the input face image;
the descriptor information obtaining module 220 is configured to obtain a 128-dimensional feature vector of each key point by using a gradient direction distribution characteristic of a neighborhood pixel of the key point, so as to obtain a sample set and descriptor information of an input face image;
P_i = [p_i1, p_i2, ..., p_i68]^T
Q = [q_1, q_2, ..., q_68]^T
wherein P_i is the descriptor information of the i-th sample and p_ij is the feature vector of the j-th key point of the i-th sample; Q is the descriptor information of the input face image and q_j is the feature vector of the j-th key point of the input face image;
p_ij = [p_ij1, p_ij2, ..., p_ij128]^T
q_j = [q_j1, q_j2, ..., q_j128]^T
wherein p_ijk is the k-th element of feature vector p_ij, q_jk is the k-th element of feature vector q_j, and 1 ≤ k ≤ 128;
an energy calculating module 230, configured to calculate key point energy values of the sample set and the input face image:
E_ij = Σ_{k=1}^{128} p_ijk
E_j = Σ_{k=1}^{128} q_jk
wherein E_ij is the energy value of the j-th key point of the i-th sample, and E_j is the energy value of the j-th key point of the input face image;
a traversing module 240, configured to traverse, for the j-th key point of the input face image, all key points of the sample set and search for key points of the sample set whose energy value lies in [E_j - w, E_j + w]; these key points are recorded as matching key points of the j-th key point of the input face image; all key points of the input face image respectively traverse all key points of the sample set, and all matching key points obtained are recorded as target key points;
a target sample determining module 250, configured to determine a target sample according to the target key point, where the target sample is a part or all of a sample set, and descriptor information of the target sample is denoted as P';
a matching module 260, configured to find a best matching sample with the input face image from the target samples according to the euclidean distance:
D = min_{1 ≤ a ≤ m} √(Σ_{j=1}^{68} W_j · Σ_{k=1}^{128} (p'_ajk - q_jk)²)
wherein D is the minimum Euclidean distance value between the target samples and the input face image; W_j is the weight value of the j-th key point, a set value, the j-th key point of the sample set and that of the input face image having the same weight; p'_ajk is the k-th element of the feature vector of the j-th key point of the a-th target sample, 1 ≤ a ≤ m, where m is the number of target samples and 1 ≤ m ≤ n; the target sample corresponding to the minimum Euclidean distance value D is the best matching sample.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention, as shown in fig. 3, the electronic device includes a processor 310, a memory 320, an input device 330, and an output device 340; the number of the processors 310 in the computer device may be one or more, and one processor 310 is taken as an example in fig. 3; the processor 310, the memory 320, the input device 330 and the output device 340 in the electronic apparatus may be connected by a bus or other means, and the connection by the bus is exemplified in fig. 3.
The memory 320 may be used as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the fast face recognition method in the embodiment of the present invention (e.g., the keypoint location module 210, the descriptor information acquisition module 220, the energy calculation module 230, the traversal module 240, the target sample determination module 250, and the matching module 260 in the fast face recognition apparatus). The processor 310 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the memory 320, so as to implement the above-mentioned fast face recognition method.
The memory 320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 320 may further include memory located remotely from the processor 310, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may be used to receive input user identity information. The output device 340 may include a display device such as a display screen.
Example four
A fourth embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a method for fast face recognition, where the method includes:
positioning key points by using an active appearance model to obtain a sample set and position information of an input face image;
the position information of the sample set is:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
the position information of the input face image is as follows:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
wherein X_i is the position information of the ith sample, 1 ≤ i ≤ n, and n is the number of samples in the sample set; Y is the position information of the input face image; x_ij and y_ij together form the position information of the jth key point of the ith sample, 1 ≤ j ≤ 68; and x_j and y_j together form the position information of the jth key point of the input face image;
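The 136-dimensional position layout above can be sketched in a few lines of plain Python (the landmark coordinates here are dummies; in the described method they would come from the active appearance model fit):

```python
# Sketch: pack 68 (x, y) key points into the 136-dimensional vector
# layout X_i = [x_i1..x_i68, y_i1..y_i68]^T described above.
# The coordinates below are made up for illustration.

def pack_keypoints(points):
    """points: list of 68 (x, y) tuples -> flat [x..., y...] vector."""
    assert len(points) == 68, "the model uses 68 face key points"
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return xs + ys  # length 136

points = [(float(j), 2.0 * float(j)) for j in range(68)]  # dummy landmarks
X = pack_keypoints(points)
print(len(X))       # 136
print(X[0], X[68])  # x and y of the first key point
```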
obtaining a 128-dimensional feature vector of each key point by using the gradient direction distribution characteristics of the neighborhood pixels of the key points so as to obtain a sample set and descriptor information of an input face image;
P_i = [p_i1, p_i2, ..., p_i68]
Q = [q_1, q_2, ..., q_68]
wherein P_i is the descriptor information of the ith sample, and p_ij is the feature vector of the jth key point of the ith sample; Q is the descriptor information of the input face image, and q_j is the feature vector of the jth key point of the input face image;
p_ij = [p_ij1, p_ij2, ..., p_ij128]
q_j = [q_j1, q_j2, ..., q_j128]
wherein p_ijk is the kth element of the feature vector p_ij, q_jk is the kth element of the feature vector q_j, and 1 ≤ k ≤ 128;
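The descriptor layout can likewise be sketched as nested lists: one 128-element vector per key point and 68 vectors per face (the values below are random stand-ins; the described method would fill them from gradient-orientation statistics of each key point's neighborhood):

```python
import random

random.seed(0)

N_KEYPOINTS, DESC_DIM = 68, 128

def make_descriptor_set():
    """One 128-dimensional vector per key point; random values stand in
    for the gradient-orientation descriptors of the method."""
    return [[random.random() for _ in range(DESC_DIM)]
            for _ in range(N_KEYPOINTS)]

Q = make_descriptor_set()   # input face: Q[j] is the vector q_j
print(len(Q), len(Q[0]))    # 68 128
```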
calculating the energy values of key points of the sample set and the input face image:
E_ij = Σ_{k=1}^{128} p_ijk
E_j = Σ_{k=1}^{128} q_jk
wherein E_ij is the energy value of the jth key point of the ith sample, and E_j is the energy value of the jth key point of the input face image;
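A minimal sketch of the energy step, under the assumption that a key point's energy is the sum of its 128 descriptor elements, giving a single scalar per key point for the window search:

```python
def keypoint_energy(desc):
    """Energy of one key point, here taken as the sum of its 128
    descriptor elements (an assumed reading of E_j / E_ij)."""
    return sum(desc)

desc = [0.5] * 128            # dummy descriptor
print(keypoint_energy(desc))  # 64.0
```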
for the jth key point of the input face image, traversing all key points of the sample set and searching for key points of the sample set whose energy values fall within [E_j - w, E_j + w]; the key points of the sample set whose energy values fall within this range are recorded as the matching key points of the jth key point of the input face image; after all key points of the input face image have each been traversed against all key points of the sample set, all the matching key points obtained are recorded as target key points, wherein w is a set deviation value;
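The energy-window search just described can be sketched as follows (the triple layout of the sample-set energies is a choice made for illustration, not part of the original text):

```python
def match_keypoints(input_energies, sample_energies, w):
    """For each input key point energy E_j, collect the sample-set key
    points whose energy lies in [E_j - w, E_j + w].

    sample_energies: list of (sample_index, keypoint_index, energy)."""
    matches = {}
    for j, ej in enumerate(input_energies):
        matches[j] = [(i, k) for (i, k, e) in sample_energies
                      if ej - w <= e <= ej + w]
    return matches

input_energies = [10.0, 50.0]
sample_energies = [(0, 0, 9.5), (0, 1, 30.0), (1, 0, 49.0)]
m = match_keypoints(input_energies, sample_energies, w=2.0)
print(m[0])  # [(0, 0)]  -> matching key points of input key point 0
print(m[1])  # [(1, 0)]
```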
determining a target sample according to the target key point, wherein the target sample is part or all of the samples in the sample set, and descriptor information of the target sample is recorded as P';
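Target-sample selection can then be sketched for the "preset number" variant of the claims (the minimum hit count is a free parameter; the other claimed variants use a count of one, or require every key point of a sample to be matched):

```python
from collections import Counter

def select_target_samples(matches, min_hits):
    """Keep samples having at least `min_hits` key points among the
    target key points (the preset-number variant of the claims)."""
    hits = Counter(i for pairs in matches.values() for (i, _k) in pairs)
    return sorted(i for i, c in hits.items() if c >= min_hits)

# matches: input key point -> [(sample_index, keypoint_index), ...]
matches = {0: [(0, 0), (1, 3)], 1: [(0, 5)], 2: [(2, 1)]}
print(select_target_samples(matches, min_hits=2))  # [0]
```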
finding out the best matching sample with the input face image from the target sample according to the Euclidean distance:
D = min_{1≤a≤m} sqrt( Σ_{j=1}^{68} W_j · Σ_{k=1}^{128} (p'_ajk - q_jk)² )
wherein D is the minimum Euclidean distance between the target samples and the input face image; W_j is the weight of the jth key point, a set value, and the jth key point of each sample and of the input face image share the same weight; p'_ajk is the kth element of the feature vector of the jth key point of the ath target sample, 1 ≤ a ≤ m, m is the number of target samples, and 1 ≤ m ≤ n; the target sample corresponding to the minimum Euclidean distance D is the best matching sample.
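The final matching step can be sketched end to end under the assumption that the distance combines the per-key-point weights W_j with squared differences over the 128 descriptor dimensions, with the threshold test of claim 5 appended:

```python
import math

def best_match(target_descriptors, query, weights, threshold=None):
    """Weighted Euclidean matching over descriptor sets.

    target_descriptors: per sample, a [key point][dimension] nested list
    query:              input-face descriptors, same nested layout
    weights:            per-key-point weights W_j
    Assumed distance per sample a:
        sqrt( sum_j W_j * sum_k (p'_ajk - q_jk)^2 )
    Returns (best_index, D); best_index is None when D exceeds the
    optional threshold (the rejection step of claim 5)."""
    best_a, best_d = None, math.inf
    for a, sample in enumerate(target_descriptors):
        d = math.sqrt(sum(
            weights[j] * sum((sample[j][k] - query[j][k]) ** 2
                             for k in range(len(query[j])))
            for j in range(len(query))))
        if d < best_d:
            best_a, best_d = a, d
    if threshold is not None and best_d > threshold:
        return None, best_d
    return best_a, best_d

# Tiny example: 2 key points x 2 dimensions instead of 68 x 128.
query = [[0.0, 0.0], [1.0, 1.0]]
targets = [[[0.0, 0.0], [1.0, 1.0]],   # identical -> distance 0
           [[1.0, 0.0], [1.0, 2.0]]]
print(best_match(targets, query, weights=[1.0, 1.0]))  # (0, 0.0)
```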
Of course, the storage medium containing computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the fast face recognition method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by hardware alone, but the former is the preferred embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, and includes instructions for enabling an electronic device (which may be a mobile phone, a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the fast face recognition apparatus, the included units and modules are divided only according to functional logic, but the division is not limited thereto as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present invention.
The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby, and any insubstantial changes and substitutions made by those skilled in the art based on the present invention are within the protection scope of the present invention.

Claims (8)

1. A rapid face recognition method is characterized by comprising the following steps:
positioning key points by using an active appearance model to obtain a sample set and position information of an input face image;
the position information of the sample set is:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
the position information of the input face image is as follows:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
wherein X_i is the position information of the ith sample, 1 ≤ i ≤ n, and n is the number of samples in the sample set; Y is the position information of the input face image; x_ij and y_ij together form the position information of the jth key point of the ith sample, 1 ≤ j ≤ 68; and x_j and y_j together form the position information of the jth key point of the input face image;
obtaining a 128-dimensional feature vector of each key point by using the gradient direction distribution characteristics of the neighborhood pixels of the key points so as to obtain a sample set and descriptor information of an input face image;
P_i = [p_i1, p_i2, ..., p_i68]
Q = [q_1, q_2, ..., q_68]
wherein P_i is the descriptor information of the ith sample, and p_ij is the feature vector of the jth key point of the ith sample; Q is the descriptor information of the input face image, and q_j is the feature vector of the jth key point of the input face image;
p_ij = [p_ij1, p_ij2, ..., p_ij128]
q_j = [q_j1, q_j2, ..., q_j128]
wherein p_ijk is the kth element of the feature vector p_ij, q_jk is the kth element of the feature vector q_j, and 1 ≤ k ≤ 128;
calculating the energy values of key points of the sample set and the input face image:
E_ij = Σ_{k=1}^{128} p_ijk
E_j = Σ_{k=1}^{128} q_jk
wherein E_ij is the energy value of the jth key point of the ith sample, and E_j is the energy value of the jth key point of the input face image;
for the jth key point of the input face image, traversing all key points of the sample set and searching for key points of the sample set whose energy values fall within [E_j - w, E_j + w]; the key points of the sample set whose energy values fall within this range are recorded as the matching key points of the jth key point of the input face image; after all key points of the input face image have each been traversed against all key points of the sample set, all the matching key points obtained are recorded as target key points, wherein w is a set deviation value;
determining a target sample according to the target key point, wherein the target sample is part or all of the samples in the sample set, and descriptor information of the target sample is recorded as P';
finding out the best matching sample with the input face image from the target sample according to the Euclidean distance:
D = min_{1≤a≤m} sqrt( Σ_{j=1}^{68} W_j · Σ_{k=1}^{128} (p'_ajk - q_jk)² )
wherein D is the minimum Euclidean distance between the target samples and the input face image; W_j is the weight of the jth key point, a set value, and the jth key point of each sample and of the input face image share the same weight; p'_ajk is the kth element of the feature vector of the jth key point of the ath target sample, 1 ≤ a ≤ m, m is the number of target samples, and 1 ≤ m ≤ n; the target sample corresponding to the minimum Euclidean distance D is the best matching sample.
2. The fast face recognition method as claimed in claim 1, wherein said determining target samples according to target key points comprises:
taking, as target samples, all samples in the sample set that correspond to the target key points.
3. The fast face recognition method as claimed in claim 1, wherein said determining target samples according to target key points comprises:
a sample is determined to be a target sample only if all of the key points corresponding to that sample are among the target key points.
4. The fast face recognition method as claimed in claim 1, wherein said determining target samples according to target key points comprises:
a sample is determined to be a target sample if the number of its corresponding key points that are among the target key points reaches a preset number.
5. The fast face recognition method of claim 1, further comprising:
comparing the minimum Euclidean distance D with a preset threshold; if D is greater than the preset threshold, the matching is unsuccessful and no best matching sample for the input face image exists in the sample set; if D is not greater than the preset threshold, the target sample corresponding to D is the best matching sample.
6. A fast face recognition apparatus, comprising:
the key point positioning module is used for positioning key points by utilizing the active appearance model to acquire the sample set and the position information of the input face image;
the position information of the sample set is:
X_i = [x_i1, x_i2, ..., x_i68, y_i1, y_i2, ..., y_i68]^T
the position information of the input face image is as follows:
Y = [x_1, x_2, ..., x_68, y_1, y_2, ..., y_68]^T
wherein X_i is the position information of the ith sample, 1 ≤ i ≤ n, and n is the number of samples in the sample set; Y is the position information of the input face image; x_ij and y_ij together form the position information of the jth key point of the ith sample, 1 ≤ j ≤ 68; and x_j and y_j together form the position information of the jth key point of the input face image;
the descriptor information acquisition module is used for acquiring a 128-dimensional feature vector of each key point by utilizing the gradient direction distribution characteristics of the neighborhood pixels of the key points so as to obtain a sample set and descriptor information of the input face image;
P_i = [p_i1, p_i2, ..., p_i68]
Q = [q_1, q_2, ..., q_68]
wherein P_i is the descriptor information of the ith sample, and p_ij is the feature vector of the jth key point of the ith sample; Q is the descriptor information of the input face image, and q_j is the feature vector of the jth key point of the input face image;
p_ij = [p_ij1, p_ij2, ..., p_ij128]
q_j = [q_j1, q_j2, ..., q_j128]
wherein p_ijk is the kth element of the feature vector p_ij, q_jk is the kth element of the feature vector q_j, and 1 ≤ k ≤ 128;
the energy calculation module is used for calculating the energy values of key points of the sample set and the input face image:
E_ij = Σ_{k=1}^{128} p_ijk
E_j = Σ_{k=1}^{128} q_jk
wherein E_ij is the energy value of the jth key point of the ith sample, and E_j is the energy value of the jth key point of the input face image;
a traversing module, configured to: for the jth key point of the input face image, traverse all key points of the sample set and search for key points of the sample set whose energy values fall within [E_j - w, E_j + w]; the key points of the sample set whose energy values fall within this range are recorded as the matching key points of the jth key point of the input face image; after all key points of the input face image have each been traversed against all key points of the sample set, all the matching key points obtained are recorded as target key points, wherein w is a set deviation value;
the target sample determining module is used for determining a target sample according to the target key point, wherein the target sample is part or all of the samples in the sample set, and the descriptor information of the target sample is marked as P';
the matching module is used for finding out the best matching sample with the input face image from the target samples according to the Euclidean distance:
D = min_{1≤a≤m} sqrt( Σ_{j=1}^{68} W_j · Σ_{k=1}^{128} (p'_ajk - q_jk)² )
wherein D is the minimum Euclidean distance between the target samples and the input face image; W_j is the weight of the jth key point, a set value, and the jth key point of each sample and of the input face image share the same weight; p'_ajk is the kth element of the feature vector of the jth key point of the ath target sample, 1 ≤ a ≤ m, m is the number of target samples, and 1 ≤ m ≤ n; the target sample corresponding to the minimum Euclidean distance D is the best matching sample.
7. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the fast face recognition method of any one of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the fast face recognition method according to any one of claims 1 to 5.
CN201810788722.6A 2018-07-18 2018-07-18 Rapid face recognition method and device, electronic equipment and storage medium Active CN109145737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810788722.6A CN109145737B (en) 2018-07-18 2018-07-18 Rapid face recognition method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109145737A CN109145737A (en) 2019-01-04
CN109145737B true CN109145737B (en) 2022-04-15

Family

ID=64800978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810788722.6A Active CN109145737B (en) 2018-07-18 2018-07-18 Rapid face recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109145737B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222448B (en) * 2019-12-31 2023-05-12 深圳云天励飞技术有限公司 Image conversion method and related product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243374A (en) * 2015-11-02 2016-01-13 湖南拓视觉信息技术有限公司 Three-dimensional human face recognition method and system, and data processing device applying same
CN107463865A (en) * 2016-06-02 2017-12-12 北京陌上花科技有限公司 Face datection model training method, method for detecting human face and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Facial Expression Recognition based on Active Appearance Model & Scale-Invariant Feature Transform;Zhong Huang and Fuji Ren;《Proceedings of the 2013 IEEE/SICE International Symposium on System Integration》;20131217;第94-99页 *
Video face recognition with appearance manifold modeling based on heteroscedastic PLDA;Sun Weiqiang;《视频应用于工程》;20141231;Vol. 38, No. 9;pp. 218-222 *


Similar Documents

Publication Publication Date Title
CN109558764B (en) Face recognition method and device and computer equipment
CN107506717B (en) Face recognition method based on depth transformation learning in unconstrained scene
Chen et al. An end-to-end system for unconstrained face verification with deep convolutional neural networks
CN106897675B (en) Face living body detection method combining binocular vision depth characteristic and apparent characteristic
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
Sagonas et al. 300 faces in-the-wild challenge: The first facial landmark localization challenge
KR100714724B1 (en) Apparatus and method for estimating facial pose, and face recognition system by the method
KR101781358B1 (en) Personal Identification System And Method By Face Recognition In Digital Image
CN103577815A (en) Face alignment method and system
CN107066969A (en) A kind of face identification method
CN101131728A (en) Face shape matching method based on Shape Context
CN111178252A (en) Multi-feature fusion identity recognition method
CN112686191B (en) Living body anti-counterfeiting method, system, terminal and medium based on three-dimensional information of human face
CN111310720A (en) Pedestrian re-identification method and system based on graph metric learning
CN111582027A (en) Identity authentication method and device, computer equipment and storage medium
CN110969101A (en) Face detection and tracking method based on HOG and feature descriptor
Sharmila et al. Automatic Attendance System based on FaceRecognition using Machine Learning
Tathe et al. Human face detection and recognition in videos
CN109145737B (en) Rapid face recognition method and device, electronic equipment and storage medium
JP2013218605A (en) Image recognition device, image recognition method, and program
Karunakar et al. Smart attendance monitoring system (sams): A face recognition based attendance system for classroom environment
Wei et al. Omni-face detection for video/image content description
Huang et al. Improving keypoint matching using a landmark-based image representation
KR20160042646A (en) Method of Recognizing Faces
CN113947781A (en) Lost child identification method, lost child identification system, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240509

Address after: 453000 HII1201-217 (2109), torch Park, No. 1789, Xinfei Avenue, Xinxiang new high tech Zone, Henan.

Patentee after: Henan plain public intellectual property operation and Management Co.,Ltd.

Country or region after: China

Address before: 453003 No. 601 Jinsui Avenue, Hongqi District, Xinxiang City, Henan Province

Patentee before: XINXIANG MEDICAL University

Country or region before: China