CN112149479A - Face recognition method, storage medium and electronic device - Google Patents


Info

Publication number
CN112149479A
CN112149479A (application CN201910578727.0A)
Authority
CN
China
Prior art keywords
face
similarity
group
data
faces
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910578727.0A
Other languages
Chinese (zh)
Inventor
刘若鹏
栾琳
季春霖
刘竹明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Guangqi Intelligent Technology Co ltd
Original Assignee
Xi'an Guangqi Future Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Guangqi Future Technology Research Institute filed Critical Xi'an Guangqi Future Technology Research Institute
Priority to CN201910578727.0A
Publication of CN112149479A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Abstract

The invention provides a face recognition method, a storage medium and an electronic device; wherein, the method comprises the following steps: acquiring the similarity of the same face attribute in the first group of face data and the second group of face data; wherein the first set of face data and the second set of face data respectively comprise face data of a plurality of persons; each face comprises a plurality of face attributes; determining the similarity between every two faces in the first group of face data and the second group of face data based on the similarity of the face attributes; and selecting one or more pairs of faces with the similarity of the faces meeting preset conditions from the first group of face data and the second group of face data. By the method and the device, the problem of low face recognition efficiency in the related technology is solved, and the effect of improving the face recognition efficiency is achieved.

Description

Face recognition method, storage medium and electronic device
Technical Field
The present invention relates to the field of computers, and in particular, to a method, a storage medium, and an electronic device for face recognition.
Background
In the prior art, the methods for face recognition include: (1) matching with classical bipartite-graph algorithms, such as the Hungarian algorithm, the earth-mover ("bulldozer") algorithm, and the KM (Kuhn-Munkres) algorithm; (2) comparing the Euclidean distance between face feature values as the basis for judging similarity, i.e., judging similarity from a static data distance; and (3) selecting matches with a first-match principle.
However, the above-mentioned prior-art face recognition methods have the following disadvantages. The Hungarian algorithm, the earth-mover algorithm and the KM algorithm are all implemented as serial computations and cannot process multiple groups of data at the same time, so their computational efficiency is low. In addition, the accuracy of the similarity in the related art depends on a single data source and ignores spatio-temporal and environmental information: the Euclidean distance between face feature values is used to decide whether two faces are similar, but feature values are static data, so this is a static comparison; moreover, the face feature values are obtained through deep learning and carry a certain error. The time, space and environment information of a face appearing in a video is discarded, even though this environmental information is dynamic, for example the position of the face in the video, the positional and motion relationships between people, the lighting on the face, the face's own spatial information, and the time at which the face was captured.
In view of the above problems in the related art, no effective solution exists at present.
Disclosure of Invention
The embodiment of the invention provides a face recognition method, a storage medium and an electronic device, which at least solve the problem of low face recognition efficiency in the related art.
According to an embodiment of the present invention, there is provided a face recognition method, including: acquiring the similarity of the same face attribute in the first group of face data and the second group of face data; wherein the first set of face data and the second set of face data respectively comprise face data of a plurality of persons; each face comprises a plurality of face attributes; determining the similarity between every two faces in the first group of face data and the second group of face data based on the similarity of the face attributes; and selecting one or more pairs of faces with the similarity of the faces meeting preset conditions from the first group of face data and the second group of face data.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the similarity of the same face attribute in the first group of face data and the second group of face data is obtained, the similarity between every two faces in the two groups is then determined based on these attribute similarities, and finally one or more pairs of faces whose similarity meets the preset conditions are selected from the two groups, thereby solving the problem of low face recognition efficiency in the related art and achieving the effect of improving recognition efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a terminal of a face recognition method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of face recognition according to an embodiment of the present invention;
fig. 3 is a block diagram of a face recognition apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed in a terminal, a computer terminal, or a similar computing device. Taking the operation on the terminal as an example, fig. 1 is a hardware structure block diagram of the terminal of the method for face recognition according to the embodiment of the present invention. As shown in fig. 1, the terminal 10 may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the terminal. For example, the terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the method for face recognition in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a method for face recognition running on the above terminal is provided, and fig. 2 is a flowchart of a method for face recognition according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, obtaining the similarity of the same face attribute in the first group of face data and the second group of face data; the first group of face data and the second group of face data respectively comprise face data of a plurality of persons; each face comprises a plurality of face attributes;
step S204, determining the similarity between every two faces in the first group of face data and the second group of face data based on the similarity of the face attributes;
step S206, one or more pairs of faces with the similarity of the faces meeting the preset conditions are selected from the first group of face data and the second group of face data.
Through steps S202 to S206, the similarity of the same face attribute in the first group of face data and the second group of face data is obtained, the similarity between every two faces in the two groups is determined based on these attribute similarities, and finally one or more pairs of faces whose similarity meets the preset conditions are selected from the two groups, which solves the problem of low face recognition efficiency in the related art and improves recognition efficiency.
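The three steps above can be sketched end to end as follows; all function names, the attribute used, and the threshold-based preset condition are assumptions for this illustration, not from the patent:

```python
def recognize(group_a, group_b, attr_sim_funcs, score_func, threshold):
    """Sketch of S202-S206: per-attribute similarity -> per-pair score ->
    select the pairs whose score satisfies the preset condition (a threshold here)."""
    pairs = []
    for i, a in enumerate(group_a):
        for j, b in enumerate(group_b):
            sims = {k: f(a[k], b[k]) for k, f in attr_sim_funcs.items()}  # S202
            score = score_func(sims)                                       # S204
            if score > threshold:                                          # S206
                pairs.append((i, j, score))
    return pairs

# Toy data: one attribute, two faces in group A, one face in group B.
group_a = [{"clarity": 0.9}, {"clarity": 0.2}]
group_b = [{"clarity": 0.85}]
funcs = {"clarity": lambda x, y: 1 - abs(x - y)}
matches = recognize(group_a, group_b, funcs, lambda s: sum(s.values()), 0.9)
```

Here only the pair (face 0 of group A, face 0 of group B) passes the 0.9 threshold.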
It should be noted that the face attributes referred to in this application include at least: face image feature values (primary feature values and secondary feature values); the length and width of the face image; the clarity of the face image; the coordinates of the face image within the frame; and the temporal characteristics of the face image's appearance.
In an optional implementation manner of this embodiment, the manner of obtaining the similarity of the same face attribute in the first group of face data and the same face attribute in the second group of face data in step S202 in this embodiment may be implemented by the following manner:
step S202-11, grouping a plurality of face attributes on the face;
and step S202-12, calculating the similarity of the face attributes by adopting the corresponding functions for the grouped face attributes.
For the above step S202-11 and step S202-12, in a specific application scenario, the function is not a single function, but is composed of a series of functions, and specifically, the process of the above step S202 may be:
First, the data of each marked face is decomposed: the data of each face is divided into feature values and attribute values. If there are k data items in total, they are denoted t_1, t_2, t_3, ..., t_(k-1), t_k.
Assuming the first face is A and the second face is B, their attributes are A_t1, A_t2, A_t3, ..., A_t(k-1), A_tk and B_t1, B_t2, B_t3, ..., B_t(k-1), B_tk respectively.
Further, the attribute similarities are calculated: for each attribute, the similarity between face A and face B is computed as follows.
T_1, T_2, T_3, ..., T_(k-1), T_k respectively denote the similarity value of each attribute, each computed with a similarity function:
T_1 = S_1(A_t1, B_t1)
T_2 = S_2(A_t2, B_t2)
T_3 = S_3(A_t3, B_t3)
...
T_(k-1) = S_(k-1)(A_t(k-1), B_t(k-1))
T_k = S_k(A_tk, B_tk)
In general, T_i = S_i(A_ti, B_ti) is the similarity calculation formula, where T_i is the similarity value of attribute i. Each S_i is an attribute similarity function; these functions are preferably different in this application, but may be the same, depending on the attribute decomposition used.
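The per-attribute step above can be sketched as follows; the attribute names and the two toy similarity functions standing in for S_i are illustrative assumptions, not the patent's actual decomposition:

```python
def similarity_per_attribute(face_a, face_b, sim_funcs):
    """Compute T_i = S_i(A_ti, B_ti) for every attribute t_i in sim_funcs."""
    return {attr: s(face_a[attr], face_b[attr]) for attr, s in sim_funcs.items()}

# Two illustrative attribute similarity functions (S_1 differs from S_2,
# as the application prefers):
sim_funcs = {
    "clarity": lambda a, b: 1 - abs(a - b),         # S_1: absolute difference
    "width":   lambda a, b: min(a, b) / max(a, b),  # S_2: size ratio
}
face_a = {"clarity": 0.9, "width": 120}
face_b = {"clarity": 0.7, "width": 100}
t = similarity_per_attribute(face_a, face_b, sim_funcs)
```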
The above step S202 is exemplified below with reference to the specific embodiment of the present application, where the similarity of the face attributes in the specific embodiment includes: similarity of the primary face characteristic values, similarity of the secondary face characteristic values and similarity of the head portrait sizes;
The similarity of the primary face feature values is calculated using the Euclidean distance.
For example: primary face feature value A = [x_1, x_2, x_3, ..., x_n], primary face feature value B = [y_1, y_2, y_3, ..., y_n].
Then the similarity (Euclidean distance) between feature value A and feature value B is:
d(A, B) = sqrt((x_1 - y_1)^2 + (x_2 - y_2)^2 + ... + (x_n - y_n)^2)
The similarity of the secondary face feature values is calculated as follows.
Suppose: secondary face feature value A = [x_1, x_2, x_3, ..., x_n], secondary face feature value B = [y_1, y_2, y_3, ..., y_n].
Then the similarity (cosine distance) between the secondary face feature value A and the secondary face feature value B is:
cos(A, B) = (x_1*y_1 + x_2*y_2 + ... + x_n*y_n) / (sqrt(x_1^2 + ... + x_n^2) * sqrt(y_1^2 + ... + y_n^2))
The head-portrait size similarity is calculated as follows.
Suppose: the top-left corner of head portrait A is (x_1, y_1) and its bottom-right corner is (x_2, y_2); the top-left corner of head portrait B is (x_3, y_3) and its bottom-right corner is (x_4, y_4).
Then the head-portrait size similarity is computed from the widths and heights of these two bounding boxes. [The exact formula survives only as an image placeholder in the source; a ratio of the smaller box area to the larger is one consistent reading.]
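The three attribute similarities above can be sketched as follows. The Euclidean and cosine formulas follow the text directly; the bounding-box size similarity (smaller area over larger) is an assumed reading, since the patent's formula is not rendered:

```python
import math

def euclidean_similarity(a, b):
    """Euclidean distance between primary feature vectors (smaller = closer)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Cosine distance between secondary feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def size_similarity(box_a, box_b):
    """Assumed size similarity: smaller bounding-box area over larger.
    Boxes are (x_topleft, y_topleft, x_bottomright, y_bottomright)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    area_a = abs((ax2 - ax1) * (ay2 - ay1))
    area_b = abs((bx2 - bx1) * (by2 - by1))
    return min(area_a, area_b) / max(area_a, area_b)

d = euclidean_similarity([1.0, 2.0], [4.0, 6.0])    # 5.0 (a 3-4-5 triangle)
c = cosine_similarity([1.0, 0.0], [1.0, 1.0])        # 1/sqrt(2)
s = size_similarity((0, 0, 10, 10), (0, 0, 5, 10))   # 0.5
```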
in another optional implementation manner of this embodiment, the manner of determining the similarity between two faces in the first group of face data and the second group of face data based on the similarity of the face attributes involved in step S204 of this embodiment may be implemented as follows:
step S204-11, calculating a first similarity score of the face attribute similarity according to a preset first function;
and step S204-12, determining a second similarity score of the similarity between every two faces in the first group of face data and the second group of face data according to the first similarity score.
For the above step S204-11 and step S204-12, the following similarity scoring function is adopted in a specific application scenario:
Score = f_1(T_1) + f_2(T_2) + ... + f_k(T_k)
where Score is the similarity score between the two faces and each f_i is a per-attribute score calculation function. The f_i may be the same or different; different calculation functions are chosen, with the necessary empirical values selected statistically, according to the relative weight of each attribute. Note that T_i is the attribute similarity calculated above.
Preferably, the chi-square distribution function is selected as the similarity score calculation function f_i in this application; that is, preferably each attribute's similarity score calculation function is the same.
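A minimal sketch of the scoring step, using simple linear weights as the per-attribute functions f_i (the patent prefers a chi-square-distribution-based f_i; the weights and attribute names here are illustrative assumptions):

```python
def face_score(attr_sims, score_funcs):
    """Score = f_1(T_1) + f_2(T_2) + ... + f_k(T_k)."""
    return sum(score_funcs[attr](t) for attr, t in attr_sims.items())

# Illustrative weights reflecting each attribute's assumed importance:
weights = {"primary": 0.6, "secondary": 0.3, "size": 0.1}
score_funcs = {attr: (lambda t, w=w: w * t) for attr, w in weights.items()}

attr_sims = {"primary": 0.9, "secondary": 0.8, "size": 0.5}  # the T_i values
score = face_score(attr_sims, score_funcs)  # 0.54 + 0.24 + 0.05 = 0.83
```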
In another optional implementation manner of this embodiment, the manner of selecting one or more pairs of faces, of which the similarity of faces satisfies the preset condition, from the first group of face data and the second group of face data, involved in step S206 of this embodiment may be implemented by:
s206-11, screening out a third similarity score which is larger than a preset threshold value from the second similarity scores;
and S206-12, selecting one or more fourth similarity scores meeting preset conditions from the screened third similarity scores based on a selection function, and taking results corresponding to the one or more fourth similarity scores as one or more selected pairs of faces.
Wherein, the step S206-12 can be implemented by:
step S1, taking the screened third similarity score and the eliminated score as elements of a first matrix, wherein the value of the eliminated score in the first matrix is zero;
step S2, selecting the optimal element from the first matrix based on the selection function,
step S3, deleting all data of the columns and rows corresponding to the optimal elements from the first matrix, and composing a new first matrix from the remaining elements;
step S4, repeating steps S2 and S3 until all elements in the latest matrix are deleted or the optimal element cannot be selected based on the selection function;
step S5, taking the results screened in steps S2 to S4 as one or more fourth similarity scores satisfying the preset condition, and taking the results corresponding to the one or more fourth similarity scores as the selected one or more pairs of faces.
For the above step S206-11 and step S206-12, in a specific embodiment, it may be:
First, the similarity score matrix is determined.
Group A contains n+1 faces, numbered 0 to n; group B contains m+1 faces, numbered 0 to m. The two groups of faces are crossed pairwise to calculate face similarity scores, and the results are expressed as the matrix
W = [ Score_0,0  Score_0,1  ...  Score_0,m
      Score_1,0  Score_1,1  ...  Score_1,m
      ...
      Score_n,0  Score_n,1  ...  Score_n,m ]
where Score_i,j is the similarity score between face i of group A and face j of group B.
The similarity score matrix threshold filter function is:
f(w_ij) = w_ij, if w_ij > a; otherwise f(w_ij) = 0
where a is the threshold and w_ij is an element of W. W2 is the matrix after threshold filtering (its dimensions are the same as those of W), i.e., the new similarity score matrix.
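The filter can be sketched in a few lines; plain Python lists stand in for the matrix, although the patent itself targets parallel matrix hardware:

```python
def threshold_filter(w, a):
    """W2: keep scores above threshold a, zero out the rest.
    The result has the same dimensions as W."""
    return [[s if s > a else 0 for s in row] for row in w]

W = [[0.9, 0.2],
     [0.4, 0.7]]
W2 = threshold_filter(W, 0.5)  # [[0.9, 0], [0, 0.7]]
```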
Global optimal similarity score selection logic
Sim_i, for i = 0 to min(n, m), is given by Sim_i = F(V_(n-i)(m-i))
where V_nm = W2 when i = 0.
Description of the formula:
The function F() is the optimal selection function, and Sim_i is the i-th matched face pair between group A and group B. The number of matches lies between 0 and min(n, m); in practice it may be smaller than min(n, m), i.e., the two groups may contain no similar faces at all, and at most min(n, m) pairs of faces can be similar.
It should be noted that the optimal selection function mainly comprises the following logic:
In the first round, the selection function is applied to the elements of the matrix V_nm; the element with subscripts i and j obtained in this round is the round's optimal element, denoted Sim_0. The i-th row and j-th column are then deleted from the matrix, leaving a matrix V_(n-1)(m-1) with one fewer row and one fewer column.
In the second round, an optimal element is screened from the matrix V_(n-1)(m-1) using the logic of step S2, denoted Sim_1, yielding a new matrix V_(n-2)(m-2).
These rounds are repeated until all elements in the matrix have been deleted or no optimal element can be screened, at which point the selection process ends.
Each element screened in this way corresponds to a position in V_nm, and the corresponding group-A and group-B faces are similar faces. If k elements are selected in total, they are Sim_0, Sim_1, Sim_2, ..., Sim_k; that is, there are k similar face pairs and the remaining faces are dissimilar.
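The round-by-round logic above can be sketched as a greedy loop; taking the largest surviving score as the "optimal element" is an assumed selection function F, which the patent does not fix:

```python
def select_matches(w2):
    """Repeatedly pick the best remaining element of W2, record its (row, col)
    as a matched face pair, then retire that row and column."""
    if not w2:
        return []
    n, m = len(w2), len(w2[0])
    used_rows, used_cols, matches = set(), set(), []
    while True:
        best = None
        for i in range(n):
            if i in used_rows:
                continue
            for j in range(m):
                if j in used_cols or w2[i][j] <= 0:
                    continue  # zeroed-out scores were eliminated by the filter
                if best is None or w2[i][j] > w2[best[0]][best[1]]:
                    best = (i, j)
        if best is None:  # every element deleted or no optimal element left
            break
        matches.append(best)
        used_rows.add(best[0])  # the row and column no longer participate
        used_cols.add(best[1])
    return matches

W2 = [[0.9, 0.0, 0.3],
      [0.0, 0.7, 0.0]]
pairs = select_matches(W2)  # [(0, 0), (1, 1)]; column 2 stays unmatched
```

Retiring a whole row and column after each pick is exactly the accelerated-matching property the description credits with reducing the later rounds' computation.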
According to the method, the problems of calculation efficiency, accuracy and matching selection speed of cross matching of two groups of faces (the two groups of faces are head portraits in two adjacent frames respectively) in the face recognition process are solved.
In addition, the face data are converted into the matrix through the algorithm, subsequent calculation is carried out in the matrix, the matrix operation advantages of modern hardware and a parallel calculation framework are fully utilized, and the calculation efficiency is improved. In addition, the similarity of the two groups of faces is calculated by using a parallel algorithm, and the parallel capability of modern hardware is utilized to improve the calculation efficiency. And selecting the best matching data from the face similarity set matched in a pairwise crossing manner by using a parallel algorithm, so that the calculation efficiency is improved.
In addition, the face similarity calculation uses the face feature value, the clarity of the face, the coordinates of the face in the video, the time sequence of the face, the moving direction of the face in the video, the size of the face, and other factors. Because these factors contribute differently to the similarity, an adjustment factor is set for each of them; tuning these factors makes the similarity distribution of the same person converge, prevents the similarity ranges of different faces from overlapping, and thus improves the distinctiveness, i.e., the precision, of the similarity.
Finally, the process of selecting the best matching data in this application uses accelerated matching logic, which improves the matching speed. Because the calculated similarity data form an n × m matrix, once an element of the matrix is selected, the elements in its row and column no longer participate in the subsequent optimal matching, which reduces the amount of computation and speeds up matching.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
In this embodiment, a face recognition apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and details are not repeated after the description is given. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 3 is a block diagram of a face recognition apparatus according to an embodiment of the present invention, and as shown in fig. 3, the apparatus includes: an obtaining module 32, configured to obtain similarity between the same face attribute in the first group of face data and the same face attribute in the second group of face data; the first group of face data and the second group of face data respectively comprise face data of a plurality of persons; each face comprises a plurality of face attributes; the determining module 34 is coupled to the obtaining module 32 and configured to determine similarity between two faces in the first group of face data and the second group of face data based on the similarity of the face attributes; and the selecting module 36 is coupled to the determining module 34 and configured to select one or more pairs of faces from the first group of face data and the second group of face data, where the similarity of the faces meets a preset condition.
Optionally, the obtaining module 32 in this embodiment further includes: the grouping unit is used for grouping a plurality of face attributes on the face; and the first calculating unit is used for calculating the similarity of the face attributes by adopting the corresponding functions for the grouped face attributes.
Optionally, the determining module 34 in this embodiment further includes: the second calculation unit is used for calculating a first similarity score of the face attribute similarity according to a preset first function; and the determining unit is used for determining a second similarity score of the similarity between every two faces in the first group of face data and the second group of face data according to the first similarity score.
Optionally, the selecting module 36 in this embodiment further includes: the screening unit is used for screening out a third similarity score which is larger than a preset threshold value from the second similarity score; and the selecting unit is used for selecting one or more fourth similarity scores meeting preset conditions from the screened third similarity scores based on a selection function, and taking results corresponding to the one or more fourth similarity scores as one or more selected pairs of faces.
Wherein the selection unit is configured to perform the following steps:
step S1, taking the screened third similarity score and the eliminated score as elements of a first matrix, wherein the value of the eliminated score in the first matrix is zero;
step S2, selecting the optimal element from the first matrix based on the selection function,
step S3, deleting all data of the columns and rows corresponding to the optimal elements from the first matrix, and composing a new first matrix from the remaining elements;
step S4, repeating steps S2 and S3 until all elements in the latest matrix are deleted or the optimal element cannot be selected based on the selection function;
step S5, taking the results screened in steps S2 to S4 as one or more fourth similarity scores satisfying the preset condition, and taking the results corresponding to the one or more fourth similarity scores as the selected one or more pairs of faces.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, obtaining the similarity of the same face attribute in the first group of face data and the second group of face data; the first group of face data and the second group of face data respectively comprise face data of a plurality of persons; each face comprises a plurality of face attributes;
s2, determining the similarity between every two faces in the first group of face data and the second group of face data based on the similarity of the face attributes;
and S3, selecting one or more pairs of faces with the similarity meeting the preset conditions from the first group of face data and the second group of face data.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, obtaining the similarity of the same face attribute in the first group of face data and the second group of face data; the first group of face data and the second group of face data respectively comprise face data of a plurality of persons; each face comprises a plurality of face attributes;
s2, determining the similarity between every two faces in the first group of face data and the second group of face data based on the similarity of the face attributes;
and S3, selecting one or more pairs of faces with the similarity meeting the preset conditions from the first group of face data and the second group of face data.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall fall within its protection scope.

Claims (10)

1. A method of face recognition, comprising:
acquiring the similarity of the same face attribute in a first group of face data and a second group of face data; wherein the first group of face data and the second group of face data each comprise face data of a plurality of persons and comprise a plurality of face images, each face comprising a plurality of face attributes;
determining the similarity between every two faces in the first group of face data and the second group of face data based on the similarity of the face attributes;
and selecting, from the first group of face data and the second group of face data, one or more pairs of faces whose similarity satisfies a preset condition.
2. The method of claim 1, wherein the obtaining the similarity between the same face attribute in the first set of face data and the same face attribute in the second set of face data comprises:
grouping a plurality of face attributes on a face;
and calculating the similarity of the face attributes from the grouped face attributes using the corresponding functions.
3. The method according to claim 1 or 2, wherein the determining the similarity between every two faces in the first group of face data and the second group of face data based on the similarity of the face attributes comprises:
calculating a first similarity score of the face attribute similarity according to a preset first function;
and determining a second similarity score of the similarity between every two faces in the first group of face data and the second group of face data according to the first similarity score.
4. The method according to claim 3, wherein the selecting one or more pairs of faces from the first group of face data and the second group of face data, the similarity of which satisfies a preset condition, comprises:
screening out a third similarity score which is larger than a preset threshold value from the second similarity scores;
and selecting one or more fourth similarity scores meeting preset conditions from the screened third similarity scores based on a selection function, and taking results corresponding to the one or more fourth similarity scores as the selected one or more pairs of faces.
5. The method according to claim 4, wherein the selecting one or more fourth similarity scores from the screened third similarity scores based on a selection function, and using the result corresponding to the one or more fourth similarity scores as the selected one or more pairs of faces comprises:
step S1, taking the screened third similarity score and the eliminated score as elements of a first matrix, wherein the eliminated score has a value of zero in the first matrix;
step S2, selecting the optimal element from the first matrix based on the selection function;
step S3, deleting all data of the columns and rows corresponding to the optimal elements from the first matrix, and forming a new first matrix from the remaining elements;
step S4, repeating steps S2 and S3 until all elements in the latest matrix are deleted or the optimal element cannot be selected based on the selection function;
step S5, taking the results screened in steps S2 to S4 as one or more fourth similarity scores satisfying preset conditions, and taking the results corresponding to the one or more fourth similarity scores as the selected one or more pairs of faces.
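The matrix procedure of steps S1 to S5 resembles a greedy assignment: repeatedly pick the best remaining element, then strike out its row and column. A rough sketch, assuming "optimal" means the maximum non-zero score (one possible selection function; the claim does not fix a particular one):

```python
def greedy_select(matrix):
    # matrix[i][j]: third similarity score for face i of group 1 and face j
    # of group 2; eliminated scores are stored as zero (step S1).
    rows = set(range(len(matrix)))
    cols = set(range(len(matrix[0]))) if matrix else set()
    selected = []
    while rows and cols:
        # Step S2: pick the optimal element (here: the maximum score).
        score, i, j = max(((matrix[i][j], i, j)
                           for i in rows for j in cols))
        if score == 0:          # step S4: no selectable element remains
            break
        selected.append((i, j, score))
        rows.remove(i)          # step S3: delete the element's row...
        cols.remove(j)          # ...and column; the rest form the new matrix
    return selected             # step S5: the fourth similarity scores

m = [[0.9, 0.0, 0.7],
     [0.8, 0.95, 0.0],
     [0.0, 0.6, 0.85]]
print(greedy_select(m))  # picks 0.95, then 0.9, then 0.85
```

A greedy pick like this is simple but not globally optimal; an exact alternative for the same matrix formulation would be the Hungarian algorithm for assignment problems.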
6. The method according to any one of claims 1 to 5, wherein the face attributes comprise at least one of: a face image feature value, the length and width of the face image, the definition (sharpness) of the face image, the coordinates of the face image within a frame, and appearance-time characteristics of the face image.
7. The method according to claim 3, wherein the similarity of the face attributes is calculated from the grouped face attributes and the corresponding functions according to the following formula:

T_k = S_k(At_k, Bt_k)

wherein S_k is the similarity-calculation function for the k-th face attribute, A and B are the two faces to be compared, At_k is the k-th face attribute of face A, Bt_k is the k-th face attribute of face B, T_k is the similarity value of the k-th face attribute between face A and face B, and k is a positive integer.
8. The method according to claim 7, wherein the similarity score is calculated according to the following formula:

Score = Σ_i f_i(T_i)

wherein Score is the similarity score between the two faces, f_i is the similarity-score calculation function for the i-th face attribute, T_i is the similarity value of the i-th face attribute, and i is a positive integer.
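The claim-8 score is a sum of per-attribute partial scores, Score = Σ_i f_i(T_i). A minimal sketch, in which the score functions f_i are assumed to be simple linear weights (an illustrative choice; the claim leaves f_i unspecified):

```python
# Sketch of the claim-8 score: Score = sum_i f_i(T_i), where T_i is the
# i-th attribute similarity (claim 7) and f_i maps it to a partial score.
# The weighted-linear f_i(t) = w_i * t used here is illustrative only.

def score(T, weights):
    return sum(w * t for w, t in zip(weights, T))

T = [0.9, 0.8, 1.0]   # per-attribute similarities T_1..T_3
w = [0.5, 0.3, 0.2]   # weights summing to 1
print(score(T, w))
```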
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.
CN201910578727.0A 2019-06-28 2019-06-28 Face recognition method, storage medium and electronic device Pending CN112149479A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910578727.0A CN112149479A (en) 2019-06-28 2019-06-28 Face recognition method, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN112149479A true CN112149479A (en) 2020-12-29

Family

ID=73891246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578727.0A Pending CN112149479A (en) 2019-06-28 2019-06-28 Face recognition method, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112149479A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2869239A2 (en) * 2013-11-04 2015-05-06 Facebook, Inc. Systems and methods for facial representation
CN105808732A (en) * 2016-03-10 2016-07-27 北京大学 Integration target attribute identification and precise retrieval method based on depth measurement learning
CN105989331A (en) * 2015-02-11 2016-10-05 佳能株式会社 Facial feature extraction apparatus, facial feature extraction method, image processing equipment and image processing method
CN106650653A (en) * 2016-12-14 2017-05-10 广东顺德中山大学卡内基梅隆大学国际联合研究院 Method for building deep learning based face recognition and age synthesis joint model
CN107292225A (en) * 2016-08-18 2017-10-24 北京师范大学珠海分校 A kind of face identification method
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN108090433A (en) * 2017-12-12 2018-05-29 厦门集微科技有限公司 Face identification method and device, storage medium, processor
CN109117808A (en) * 2018-08-24 2019-01-01 深圳前海达闼云端智能科技有限公司 Face recognition method and device, electronic equipment and computer readable medium


Similar Documents

Publication Publication Date Title
Theis et al. Faster gaze prediction with dense networks and fisher pruning
WO2021139309A1 (en) Method, apparatus and device for training facial recognition model, and storage medium
CN108256544B (en) Picture classification method and device, robot
Yuan et al. Factorization-based texture segmentation
US20190340510A1 (en) Sparsifying neural network models
WO2021022521A1 (en) Method for processing data, and method and device for training neural network model
CN109816009A (en) Multi-tag image classification method, device and equipment based on picture scroll product
CN111738357B (en) Junk picture identification method, device and equipment
CN109284749A (en) Refine image recognition
CN109754359B (en) Pooling processing method and system applied to convolutional neural network
US20160357845A1 (en) Method and Apparatus for Classifying Object Based on Social Networking Service, and Storage Medium
CN109784474A (en) A kind of deep learning model compression method, apparatus, storage medium and terminal device
CN112328715B (en) Visual positioning method, training method of related model, related device and equipment
CN112613581A (en) Image recognition method, system, computer equipment and storage medium
US9639598B2 (en) Large-scale data clustering with dynamic social context
US20230401833A1 (en) Method, computer device, and storage medium, for feature fusion model training and sample retrieval
WO2020211242A1 (en) Behavior recognition-based method, apparatus and storage medium
CN115018039A (en) Neural network distillation method, target detection method and device
CN111985597A (en) Model compression method and device
Mozejko et al. Superkernel neural architecture search for image denoising
CN112766421A (en) Face clustering method and device based on structure perception
CN112529068A (en) Multi-view image classification method, system, computer equipment and storage medium
CN111680664A (en) Face image age identification method, device and equipment
CN111598176A (en) Image matching processing method and device
Singh et al. Mesh classification with dilated mesh convolutions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221205

Address after: 710000 second floor, building B3, yunhuigu, No. 156, Tiangu 8th Road, software new town, high tech Zone, Xi'an, Shaanxi

Applicant after: Xi'an Guangqi Intelligent Technology Co.,Ltd.

Address before: Second floor, B3, yunhuigu, 156 Tiangu 8th Road, software new town, Xi'an City, Shaanxi Province 710000

Applicant before: Xi'an Guangqi Future Technology Research Institute