CN111723647B - Path-based face recognition method and device, computer equipment and storage medium - Google Patents

Path-based face recognition method and device, computer equipment and storage medium

Info

Publication number
CN111723647B
CN111723647B (application CN202010357554.2A)
Authority
CN
China
Prior art keywords
face
recognized
matrix
descending
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010357554.2A
Other languages
Chinese (zh)
Other versions
CN111723647A (en)
Inventor
高超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202010357554.2A priority Critical patent/CN111723647B/en
Publication of CN111723647A publication Critical patent/CN111723647A/en
Application granted granted Critical
Publication of CN111723647B publication Critical patent/CN111723647B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses a path-based face recognition method and apparatus, a computer device and a storage medium. The method comprises: acquiring an environment picture; retrieving n reference pictures; performing face feature vector extraction to obtain m face feature vectors to be recognized and n reference face feature vectors; calculating similarity values between the face feature vectors to be recognized and the reference face feature vectors to obtain m × n similarity values; generating an m × n initial matrix; sorting the initial matrix in descending order to obtain a descending matrix; generating a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); generating a path P in the descending matrix; generating a restoration path Q in the descending matrix according to the formula L_inv = argsort(L, reverse=False); acquiring the column numbers V1, V2, …, Vm of the m nodes q1, q2, …, qm; and establishing a face correspondence. The range of recognizable scenes is thereby widened, recognition accuracy is improved, and overall recognition efficiency is improved.

Description

Path-based face recognition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for path-based face recognition, a computer device, and a storage medium.
Background
Environmental face recognition means recognizing a face in an unidentified environment picture by finding the identical prestored face picture in a preset database. Traditional face recognition has low detection accuracy when an environment picture contains multiple faces: it first performs preliminary detection on the environment picture to obtain several faces, then recognizes and matches each face individually. This is not only inefficient, it also fails on special environment pictures. For example, if the environment picture contains two or more similar faces (such as twins), the traditional method is likely to identify them as the same person, which is obviously wrong. Traditional face recognition schemes therefore suffer from low recognition efficiency and low recognition accuracy.
Disclosure of Invention
The application mainly aims to provide a path-based face recognition method and apparatus, a computer device and a storage medium, so as to improve recognition accuracy and overall recognition efficiency.
In order to achieve the above object, the present application provides a method for recognizing a face based on a path, comprising the following steps:
acquiring an environment picture, wherein the environment picture comprises m faces to be recognized;
calling n reference pictures in a preset face database, wherein each reference picture comprises a face;
respectively carrying out face feature vector extraction processing on the m faces to be recognized and the n reference pictures so as to obtain m face feature vectors to be recognized corresponding to the m faces to be recognized and n reference face feature vectors corresponding to the n reference pictures;
calculating, according to a preset similarity calculation method, similarity values between the face feature vectors to be recognized and the reference face feature vectors, so as to obtain m × n similarity values;
generating an m × n initial matrix according to the m × n similarity values, wherein the row and column of each element in the m × n initial matrix correspond to a face to be recognized and a reference picture, respectively;
sorting the initial matrix in descending order to obtain a descending matrix; and generating a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True), wherein the descending order is based on the maximum element of each row, argsort returns the indices that would sort an array, max is the maximum function, A is the initial matrix, axis=1 indicates that the function is applied to each row of the matrix, axis=0 indicates that the function is applied to each column of the matrix, and reverse=True indicates descending order;
generating a path P in the descending matrix, wherein the path P is composed of m nodes p1, p2, …, pm located in the first row, second row, …, m-th row of the descending matrix respectively, the nodes p1, p2, …, pm are located in different columns of the descending matrix, and the sum of the values of the nodes p1, p2, …, pm is greater than the sum of the node values of any other such path;
according to the formula:

L_inv = argsort(L, reverse=False)

generating a restoration path Q in the descending matrix, wherein the restoration path Q comprises m nodes q1, q2, …, qm;
acquiring the column numbers V1, V2, …, Vm corresponding to the m nodes q1, q2, …, qm in the descending matrix; and establishing a face correspondence, wherein the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, …, and the m-th face to be recognized corresponds to the reference picture represented by column number Vm.
Further, the step of acquiring an environment picture, wherein the environment picture comprises m faces to be recognized, comprises:
acquiring a video of a preset specified duration collected by a camera, wherein the video comprises m objects to be recognized;
extracting t frame pictures from the video, wherein each of the t frame pictures comprises the m objects to be recognized;
respectively performing face segmentation and face area detection on the t frame pictures, so as to obtain m face sets to be recognized and t × m face areas, wherein each face set to be recognized comprises t faces and corresponds to one object to be recognized;
extracting the largest-area face from each of the m face sets to be recognized, so as to obtain m largest-area faces;
and replacing the m faces in a designated frame picture among the t frame pictures with the m largest-area faces, and recording the designated frame picture as the environment picture.
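The largest-area selection in the steps above can be sketched as follows, assuming the t × m face areas have already been detected per frame and per object (all names and values here are hypothetical):

```python
import numpy as np

# areas[k][i]: detected face area of object i in frame k (t frames, m objects).
t, m = 5, 3
rng = np.random.default_rng(1)
areas = rng.integers(100, 1000, size=(t, m))

# For each of the m objects to be recognized, pick the frame whose detected
# face has the largest area; those m faces would then replace the faces in
# the designated frame picture.
best_frame_per_object = np.argmax(areas, axis=0)           # shape (m,)
largest_areas = areas[best_frame_per_object, np.arange(m)]
print(largest_areas.tolist() == areas.max(axis=0).tolist())  # True
```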
Further, the step of calculating the similarity value between the face feature vector to be recognized and the reference face feature vector according to a preset similarity calculation method includes:
according to the formula:

A_FG = (Σᵢ Fᵢ·Gᵢ) / (√(Σᵢ Fᵢ²) · √(Σᵢ Gᵢ²))

calculating the similarity value A_FG between the face feature vector to be recognized and the reference face feature vector, wherein F is the face feature vector to be recognized, G is the reference face feature vector, Fᵢ is the i-th component of the face feature vector F to be recognized, and Gᵢ is the i-th component of the reference face feature vector G.
Further, the step of obtaining a descending matrix by descending the order of the initial matrix includes:
according to the formula:

Ã_uj = H_uj · A_uj

normalizing the initial matrix to obtain a normalized matrix Ã, wherein Ã_uj is the element in the u-th row and j-th column of the normalized matrix Ã, H_uj is a preset normalization coefficient, and A_uj is the element in the u-th row and j-th column of the initial matrix;
and performing descending order arrangement on the normalized matrix to obtain a descending order matrix.
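A minimal sketch of the normalization step, assuming a multiplicative form for the preset coefficient H_uj (the claim only says the coefficient is preset, so both the form and the choice of values here are assumptions):

```python
import numpy as np

# Hypothetical initial similarity matrix A and a preset coefficient matrix H.
# Here H is chosen to scale the global maximum of A to 1; this particular
# choice is an illustration, not the patent's prescription.
A = np.array([[0.9, 0.3],
              [0.2, 0.8]])
H = np.full_like(A, 1.0 / A.max())
A_norm = H * A            # element-wise: A~_uj = H_uj * A_uj
print(A_norm.max())       # 1.0
```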
Further, the step of establishing a face correspondence, wherein the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, …, and the m-th face to be recognized corresponds to the reference picture represented by column number Vm, comprises:
acquiring the m values at the positions of the m nodes q1, q2, …, qm in the descending matrix;
if the m values are all greater than a preset similarity threshold, establishing the face correspondence, wherein the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, …, and the m-th face to be recognized corresponds to the reference picture represented by column number Vm.
The present application further provides a path-based face recognition apparatus, comprising:
the environment picture acquiring unit is used for acquiring an environment picture, and the environment picture comprises m faces to be recognized;
the reference picture calling unit is used for calling n reference pictures in a preset face database, and each reference picture comprises a face;
a face feature vector extraction unit, configured to perform face feature vector extraction processing on the m faces to be recognized and the n reference pictures, respectively, so as to obtain m face feature vectors to be recognized corresponding to the m faces to be recognized and n reference face feature vectors corresponding to the n reference pictures;
a similarity value calculation unit, configured to calculate, according to a preset similarity calculation method, similarity values between the face feature vectors to be recognized and the reference face feature vectors, so as to obtain m × n similarity values;
an initial matrix generation unit, configured to generate an m × n initial matrix according to the m × n similarity values, wherein the row and column of each element in the m × n initial matrix correspond to a face to be recognized and a reference picture, respectively;
a descending matrix obtaining unit, configured to sort the initial matrix in descending order to obtain a descending matrix, and to generate a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True), wherein the descending order is based on the maximum element of each row, argsort returns the indices that would sort an array, max is the maximum function, A is the initial matrix, axis=1 indicates that the function is applied to each row of the matrix, axis=0 indicates that the function is applied to each column of the matrix, and reverse=True indicates descending order;
a path P generation unit, configured to generate a path P in the descending matrix, wherein the path P is composed of m nodes p1, p2, …, pm located in the first row, second row, …, m-th row of the descending matrix respectively, the nodes p1, p2, …, pm are located in different columns of the descending matrix, and the sum of the values of the nodes p1, p2, …, pm is greater than the sum of the node values of any other such path;
a restoration path Q generation unit, configured to generate a restoration path Q in the descending matrix according to the formula L_inv = argsort(L, reverse=False), wherein the restoration path Q comprises m nodes q1, q2, …, qm;
a face correspondence establishing unit, configured to obtain the column numbers V1, V2, …, Vm corresponding to the m nodes q1, q2, …, qm in the descending matrix, and to establish a face correspondence, wherein the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, …, and the m-th face to be recognized corresponds to the reference picture represented by column number Vm.
Further, the environment picture acquiring unit includes:
the device comprises a video acquisition subunit, a video recognition subunit and a recognition processing subunit, wherein the video acquisition subunit is used for acquiring a preset video with specified duration collected by a camera, and the video comprises m objects to be recognized;
the frame picture extracting subunit is used for extracting t frame pictures from the video, wherein any one of the t frame pictures comprises m objects to be identified;
a face area obtaining subunit, configured to perform face segmentation and face area detection on the t frame pictures, respectively, so as to obtain m to-be-recognized face sets and t × m face areas, where each to-be-recognized face set includes t faces, and each to-be-recognized face set corresponds to an object to be recognized;
the largest-area face extraction subunit is used for extracting the largest-area faces from the m face sets to be recognized respectively so as to obtain m faces with the largest areas;
and an environment picture recording subunit, configured to replace the m faces in a designated frame picture among the t frame pictures with the m largest-area faces, and to record the designated frame picture as the environment picture.
Further, the similarity value calculation unit includes:
a similarity value calculating subunit, configured to calculate, according to the formula

A_FG = (Σᵢ Fᵢ·Gᵢ) / (√(Σᵢ Fᵢ²) · √(Σᵢ Gᵢ²)),

the similarity value A_FG between the face feature vector to be recognized and the reference face feature vector, wherein F is the face feature vector to be recognized, G is the reference face feature vector, Fᵢ is the i-th component of the face feature vector F to be recognized, and Gᵢ is the i-th component of the reference face feature vector G.
The present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of any of the above methods when the processor executes the computer program.
The present application provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of any of the above.
The path-based face recognition method and apparatus, computer device and storage medium of the present application acquire an environment picture; retrieve n reference pictures from a preset face database; perform face feature vector extraction on the m faces to be recognized and the n reference pictures, so as to obtain m face feature vectors to be recognized and n reference face feature vectors; calculate similarity values between the face feature vectors to be recognized and the reference face feature vectors, so as to obtain m × n similarity values; generate an m × n initial matrix; sort the initial matrix in descending order to obtain a descending matrix; generate a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); generate a path P in the descending matrix; generate a restoration path Q in the descending matrix according to the formula L_inv = argsort(L, reverse=False); acquire the column numbers V1, V2, …, Vm of the m nodes q1, q2, …, qm; and establish a face correspondence. The range of recognizable scenes is thereby widened, recognition accuracy is improved, and overall recognition efficiency is improved.
Drawings
Fig. 1 is a schematic flow chart of a path-based face recognition method according to an embodiment of the present application;
fig. 2 is a schematic block diagram of a structure of a path-based face recognition apparatus according to an embodiment of the present application;
fig. 3 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, an embodiment of the present application provides a method for face recognition based on a path, including the following steps:
S1, acquiring an environment picture, wherein the environment picture comprises m faces to be recognized;
S2, retrieving n reference pictures in a preset face database, wherein each reference picture comprises a face;
S3, respectively performing face feature vector extraction on the m faces to be recognized and the n reference pictures, so as to obtain m face feature vectors to be recognized corresponding to the m faces to be recognized and n reference face feature vectors corresponding to the n reference pictures;
S4, calculating, according to a preset similarity calculation method, similarity values between the face feature vectors to be recognized and the reference face feature vectors, so as to obtain m × n similarity values;
S5, generating an m × n initial matrix according to the m × n similarity values, wherein the row and column of each element in the m × n initial matrix correspond to a face to be recognized and a reference picture, respectively;
S6, sorting the initial matrix in descending order to obtain a descending matrix, and generating a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); the descending order is based on the maximum element of each row, argsort returns the indices that would sort an array, max is the maximum function, A is the initial matrix, axis=1 indicates that the function is applied to each row of the matrix, axis=0 indicates that the function is applied to each column of the matrix, and reverse=True indicates descending order;
S7, generating a path P in the descending matrix, wherein the path P is composed of m nodes p1, p2, …, pm located in the first row, second row, …, m-th row of the descending matrix respectively, the nodes p1, p2, …, pm are located in different columns of the descending matrix, and the sum of the values of the nodes p1, p2, …, pm is greater than the sum of the node values of any other such path;
S8, generating a restoration path Q in the descending matrix according to the formula L_inv = argsort(L, reverse=False), wherein the restoration path Q comprises m nodes q1, q2, …, qm;
S9, obtaining the column numbers V1, V2, …, Vm corresponding to the m nodes q1, q2, …, qm in the descending matrix; and establishing a face correspondence, wherein the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, …, and the m-th face to be recognized corresponds to the reference picture represented by column number Vm.
In the present application, face feature vectors are extracted and similarity values are calculated, a matrix associated with the similarities is generated, the matrix is sorted in descending order, the descending index is obtained, the path and the restoration path are generated, and the face correspondence is obtained, thereby achieving simultaneous recognition of multiple faces. The process exploits the fact that different faces in the same environment picture belong to different people (reflected in the constraint that the nodes p1, p2, …, pm are located in different columns of the descending matrix), so it can handle similar faces, widening the range of recognizable scenes and improving recognition accuracy. Because a global decision strategy is adopted (reflected in the constraint that the sum of the values of the nodes p1, p2, …, pm is greater than that of any other path), overall recognition accuracy is improved. And because the face correspondences are constructed simultaneously, overall recognition efficiency is improved.
As described in step S1, an environment picture is acquired, wherein the environment picture comprises m faces to be recognized. The environment picture may be any picture, for example a picture captured by a preset camera, or a frame of a video captured by the camera. Its defining feature is that it contains m faces to be recognized. Further, m is an integer greater than 2, and the m faces to be recognized belong to different natural persons.
As described in step S2, n reference pictures in the preset face database are retrieved, and each reference picture includes a face. The face database refers to a database in which a plurality of reference pictures are prestored, each reference picture comprises a face, and each face corresponds to different natural persons. The purpose of face recognition is to find a natural person to which a face to be recognized belongs in the environment picture, that is, to find a reference picture corresponding to the face to be recognized.
As described in step S3, face feature vector extraction is performed on the m faces to be recognized and the n reference pictures respectively, so as to obtain m face feature vectors to be recognized corresponding to the m faces to be recognized and n reference face feature vectors corresponding to the n reference pictures. A face feature vector is a vector reflecting facial features, such as eyebrow-to-eye distance and eye width, which are mapped to numerical values and used as its components; performing face feature vector extraction on a face therefore yields one such vector. The extraction may be performed in any feasible manner, which is not described again here.
As described in step S4, according to a preset similarity calculation method, similarity values between the face feature vectors to be recognized and the reference face feature vectors are calculated, so as to obtain m × n similarity values. The similarity calculation method may be any feasible method, such as cosine similarity. Because each similarity value is computed between one face feature vector to be recognized and one reference face feature vector, m face feature vectors to be recognized and n reference face feature vectors finally yield m × n similarity values.
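The cosine similarity named above, as one feasible similarity calculation method, can be sketched as follows (the feature vectors here are hypothetical):

```python
import numpy as np

def cosine_similarity(f: np.ndarray, g: np.ndarray) -> float:
    """Similarity A_FG between a face feature vector to be recognized (f)
    and a reference face feature vector (g)."""
    return float(np.dot(f, g) / (np.linalg.norm(f) * np.linalg.norm(g)))

# Hypothetical 4-dimensional feature vectors for illustration.
f = np.array([1.0, 0.0, 1.0, 0.0])
g = np.array([1.0, 0.0, 1.0, 0.0])
print(cosine_similarity(f, g))   # identical vectors give 1.0
```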
As described in step S5, an m × n initial matrix is generated from the m × n similarity values, wherein the row and column of each element in the m × n initial matrix correspond to a face to be recognized and a reference picture, respectively. The m × n initial matrix has m rows and n columns; each row corresponds to one face to be recognized and each column corresponds to one reference picture, so the value of any element of the initial matrix is the similarity value between the corresponding face to be recognized and the corresponding reference picture. This makes simultaneous recognition of multiple faces possible.
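Building the m × n initial matrix from the two sets of feature vectors can be sketched as follows; row-normalizing both sets first lets a single matrix product produce all m × n cosine similarity values at once (the dimensions and data are hypothetical):

```python
import numpy as np

# Hypothetical: m = 3 faces to be recognized, n = 4 reference pictures,
# 128-dimensional feature vectors.
rng = np.random.default_rng(0)
to_recognize = rng.normal(size=(3, 128))
references = rng.normal(size=(4, 128))

# Row-normalize both sets; one matrix product then yields all m x n cosine
# similarity values. Element (u, j) is the similarity between the u-th face
# to be recognized and the j-th reference picture.
t = to_recognize / np.linalg.norm(to_recognize, axis=1, keepdims=True)
r = references / np.linalg.norm(references, axis=1, keepdims=True)
initial_matrix = t @ r.T
print(initial_matrix.shape)   # (3, 4)
```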
As described in step S6, the initial matrix is sorted in descending order to obtain the descending matrix, and the descending index L is generated according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); the descending order is based on the maximum element of each row, argsort returns the indices that would sort an array, max is the maximum function, A is the initial matrix, axis=1 indicates that the function is applied to each row of the matrix, axis=0 indicates that the function is applied to each column of the matrix, and reverse=True indicates descending order. For ease of understanding, the application is explained here with a third-order matrix whose element values (shown as images in the original document) were chosen artificially for clarity rather than realism. After sorting in descending order, the descending matrix is obtained; then, according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True), the descending index is generated, wherein 1 denotes the index value of the first row of the descending matrix, 2 that of the second row, and 3 that of the third row, giving (1, 2, 3) (or (0, 1, 2), depending on whether the first index value is 0 or 1, which does not affect the implementation of the present application).
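The descending sort and the descending index L can be sketched with numpy, whose argsort lacks a reverse parameter, so the ascending result is flipped instead (the matrix values are hypothetical, not those of the original figure):

```python
import numpy as np

# Hypothetical initial matrix.
A = np.array([[0.10, 0.25, 0.40],
              [0.75, 0.30, 0.65],
              [0.20, 0.90, 0.10]])
row_max = A.max(axis=1)           # max(A, axis=1): maximum of each row
L = np.argsort(row_max)[::-1]     # argsort(..., axis=0, reverse=True) emulated
descending_matrix = A[L]          # rows re-ordered by decreasing row maximum
print(L)                          # [2 1 0]: row maxima 0.9, 0.75, 0.4
```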
As described in step S7, a path P is generated in the descending matrix; the path P is composed of m nodes p1, p2, …, pm located in the first row, second row, …, m-th row of the descending matrix respectively, the nodes are located in different columns of the descending matrix, and the sum of their values is greater than the sum of the node values of any other such path. Continuing with the foregoing descending-matrix example (values shown as an image in the original document): the node p1 has value 0.75 and is located in row one, column one (the 0.9 in the first row cannot be chosen, because the sum of all node values must be maximal), the node p2 has value 0.65 and is located in row two, column three, and the node p3 has value 0.25 and is located in row three, column two. Since the nodes p1, p2, …, pm are located in different columns of the descending matrix, and the sum of their values is greater than that of any other path, overall recognition accuracy is improved and the method is suitable for recognizing similar faces.
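Selecting the path P is an instance of the assignment problem: one node per row, distinct columns, maximal sum. The patent does not name a solver; the sketch below uses exhaustive search over column permutations for clarity (the Hungarian algorithm would be the usual choice for larger m), with hypothetical matrix values arranged so the optimum takes 0.75, 0.65 and 0.25 as in the example above:

```python
from itertools import permutations
import numpy as np

# Hypothetical descending matrix: taking the 0.9 in row one would force a
# worse total, so the optimal path takes 0.75 (row 1, col 1),
# 0.65 (row 2, col 3) and 0.25 (row 3, col 2).
D = np.array([[0.75, 0.20, 0.90],
              [0.10, 0.30, 0.65],
              [0.05, 0.25, 0.10]])
m = D.shape[0]

# One node per row, all columns distinct, sum maximal: exhaustive search
# over the m! column permutations (fine for small m).
best_cols = max(permutations(range(m)),
                key=lambda cols: sum(D[r, c] for r, c in enumerate(cols)))
best_sum = sum(D[r, c] for r, c in enumerate(best_cols))
print(best_cols, round(best_sum, 2))   # (0, 2, 1) 1.65
```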
As stated in step S8 above, according to the formula L_inv = argsort(L, reverse=False), a restoration path Q is generated in the descending matrix, where the restoration path Q comprises m nodes q1, q2, ..., qm, and argsort(L, reverse=False) indicates that the descending index L is sorted in ascending order. Continuing with the foregoing example of the descending matrix [shown only as an image in the original], the corresponding path P comprises nodes p1, p2, p3, and the generated restoration path Q comprises nodes q1, q2, q3; from the formula L_inv = argsort(L, reverse=False) it is known that q1 = p3, q2 = p2, and q3 = p1.
As described in step S9 above, the column numbers V1, V2, ..., Vm respectively corresponding to the m nodes q1, q2, ..., qm in the descending matrix are obtained, and a face correspondence is established, in which the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, ..., and the m-th face to be recognized corresponds to the reference picture represented by column number Vm. Taking the foregoing descending matrix [shown only as an image in the original] as an example, the column numbers V1, V2 and V3 corresponding to q1, q2 and q3 are the second column, the third column and the first column, respectively, so that the first face to be recognized corresponds to the reference picture represented by the second column (since column numbers are never changed during matrix processing, the same column number represents the same reference picture at every stage), the second face to be recognized corresponds to the reference picture represented by the third column, and the third face to be recognized corresponds to the reference picture represented by the first column. The correspondences of all the face pictures are thus constructed simultaneously, which improves the overall recognition efficiency.
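The inverse-index and column-lookup steps can be sketched as follows. The values of L and the path columns here are hypothetical, chosen so that the result reproduces the example correspondence in the text (second, third, first column); the patent's own example matrix is not reproduced in the source.

```python
import numpy as np

L = np.array([2, 1, 0])     # descending index: descending-matrix row k holds initial row L[k]
cols = np.array([0, 2, 1])  # 0-based columns of path nodes p1, p2, p3

# L_inv = argsort(L, reverse=False): for each initial row u, the position
# its row occupies in the descending matrix.
L_inv = np.argsort(L)
V = cols[L_inv]             # V[u]: reference-picture column for the u-th face
```

With these values, V + 1 gives the 1-based column numbers (2, 3, 1): the first face maps to the second column, the second face to the third column, and the third face to the first column, as in the example above.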
In one embodiment, the step S1 of obtaining an environment picture including m faces to be recognized includes:
s1, acquiring a preset video with specified duration collected by a camera, wherein the video comprises m objects to be identified;
s2, extracting t frame pictures from the video, wherein any one of the frame pictures comprises m objects to be identified;
s3, respectively carrying out face cutting and face area detection on the t frame pictures to obtain m face sets to be recognized and t multiplied by m face areas, wherein each face set to be recognized comprises t faces, and each face set to be recognized corresponds to an object to be recognized;
s4, extracting faces with the largest area from the m face sets to be recognized respectively, and accordingly obtaining m faces with the largest area;
s5, replacing m faces of a designated frame picture in the t frame pictures with the m faces with the largest area, and recording the designated frame picture as an environment picture.
As described above, the present application realizes the acquisition of an environment picture containing m faces to be recognized. An environment picture captured at random suffers from a problem that is difficult to avoid: some faces are exposed too little, so that the analyzable data is insufficient and the final recognition result is inaccurate. To overcome this problem, the present application acquires a preset video of specified duration collected by a camera, extracts t frame pictures from the video, performs face cutting and face area detection on the t frame pictures, extracts the face with the largest area from each of the m face sets to be recognized, and replaces the m faces of a specified frame picture among the t frame pictures with the m largest-area faces. In this way, the face area of every face in the specified frame picture is as large as possible without altering the facial characteristics of the natural person to whom it belongs, providing as much analyzable data as possible and improving recognition accuracy.
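A minimal sketch of steps s3-s5, assuming upstream face detection has already produced, for each of the m objects, its t cropped faces with measured areas; the function name and data shapes here are illustrative assumptions, not the patent's API.

```python
def pick_largest_faces(face_sets):
    """face_sets: list of m lists, each holding t (crop, area) pairs for one
    object across the t frames. Returns the largest-area crop per object."""
    return [max(faces, key=lambda face: face[1])[0] for faces in face_sets]

# Hypothetical data: two objects, two frames each; crops stand in as strings.
face_sets = [[("obj1_f1", 1200), ("obj1_f2", 2100)],
             [("obj2_f1", 900),  ("obj2_f2", 450)]]
largest = pick_largest_faces(face_sets)   # ['obj1_f2', 'obj2_f1']
```

The selected crops would then be pasted over the corresponding faces of the chosen frame to form the environment picture.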
In one embodiment, the step S4 of calculating the similarity value between the to-be-recognized face feature vector and the reference face feature vector according to a preset similarity calculation method includes:
s401, according to a formula:
A_FG = Σ(F_i · G_i) / ( √(Σ F_i²) · √(Σ G_i²) )

calculating the similarity value A_FG of the face feature vector to be recognized and the reference face feature vector, where F is the face feature vector to be recognized, G is the reference face feature vector, F_i is the i-th component of the face feature vector F to be recognized, and G_i is the i-th component of the reference face feature vector G.
As described above, the similarity value between the face feature vector to be recognized and the reference face feature vector is calculated according to the preset similarity calculation method. The present application uses the formula

A_FG = Σ(F_i · G_i) / ( √(Σ F_i²) · √(Σ G_i²) )

to calculate the similarity value A_FG between the face feature vector to be recognized and the reference face feature vector, taking the angular difference between the two vectors as the basis of the similarity calculation. The maximum value of the similarity value A_FG is 1: when the face feature vector to be recognized is most similar to the reference face feature vector, that is, when the angular difference between them is 0, the similarity value A_FG equals 1 and the corresponding face to be recognized is most similar to the reference face. Similarity calculation with high accuracy is thereby realized.
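The angle-based similarity described above matches cosine similarity. Since the exact formula survives only as an image in the original, the following plain-Python sketch is a reconstruction consistent with the surrounding description (value 1 at zero angular difference), not a verbatim copy of the patent's formula.

```python
import math

def similarity(F, G):
    """Cosine similarity A_FG between two feature vectors:
    dot product divided by the product of the vector norms."""
    dot = sum(f * g for f, g in zip(F, G))
    norm_f = math.sqrt(sum(f * f for f in F))
    norm_g = math.sqrt(sum(g * g for g in G))
    return dot / (norm_f * norm_g)

a = similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # parallel vectors -> 1.0
```

Parallel vectors (zero angular difference) yield exactly 1, the maximum; orthogonal vectors yield 0.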
In one embodiment, the step S6 of obtaining a descending matrix by descending the initial matrix includes:
s601, according to a formula:
Ā_uj = H_uj · A_uj / (Σ_{k=1..n} A_uk)

normalizing the initial matrix to obtain a normalized matrix Ā, where Ā_uj is the element in the u-th row and j-th column of the normalized matrix Ā, H_uj is a preset normalization coefficient, and A_uj is the element in the u-th row and j-th column of the initial matrix;
s602, performing descending order arrangement on the normalization matrix to obtain a descending order matrix.
As described above, the descending matrix is obtained by sorting the initial matrix in descending order. The present application first normalizes the initial matrix according to the formula

Ā_uj = H_uj · A_uj / (Σ_{k=1..n} A_uk)

and then sorts the normalized matrix in descending order to obtain the descending matrix, so as to prevent deviations among the values obtained by similarity calculation for different faces. The normalization is performed with respect to the sum of the similarity values in the same row, and the normalization coefficient H_uj is introduced into the normalization process to provide an additional means of fine-tuning the face recognition. The normalization coefficient H_uj can be obtained in any manner, for example set manually or derived from statistical historical data. It should be noted that the present application may use the same or different similarity calculation methods for different faces to be recognized; when different similarity calculation methods are used (the same face to be recognized must, however, use a single similarity calculation method), the calculation accuracy can be improved, since the most suitable similarity calculation method is not necessarily the same for different faces, and the normalization matrix provided in the present application resolves the differences caused by different similarity calculation methods, so that the calculation accuracy does not decrease but rather increases.
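A sketch of the row-sum normalisation with per-element coefficients H. The formula image is not reproduced in the source, so the exact form here is reconstructed from the text: each element is scaled by its coefficient and divided by its row sum.

```python
import numpy as np

def normalize(A, H):
    """Element-wise: A_bar[u, j] = H[u, j] * A[u, j] / (sum of row u of A)."""
    row_sums = A.sum(axis=1, keepdims=True)
    return H * A / row_sums

A = np.array([[0.2, 0.3, 0.5],
              [0.2, 0.2, 1.6]])
H = np.ones_like(A)        # neutral coefficients; adjust to fine-tune recognition
A_bar = normalize(A, H)    # with H all ones, each row of A_bar sums to 1
```

With neutral coefficients the rows become comparable regardless of which similarity method produced them, which is the stated purpose of the normalization step.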
In one embodiment, the step S9 of establishing a face correspondence, where the face correspondence is that the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, ..., and the m-th face to be recognized corresponds to the reference picture represented by column number Vm, includes:
s901, acquiring m numerical values of corresponding positions of the m nodes q1, q2,. and qm in the descending matrix, and judging whether the m numerical values are all larger than a preset similarity threshold value;
and S902, if the m numerical values are all larger than a preset similarity threshold, establishing a face corresponding relationship, wherein the face corresponding relationship is that the first face to be recognized corresponds to a reference picture represented by the column number V1, the second face to be recognized corresponds to a reference picture represented by the column number V2, and the mth face to be recognized corresponds to a reference picture represented by the column number Vm.
As described above, a face correspondence is established in which the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, and so on. Although the face correspondence established directly in this way is the optimal correspondence, when the data collected in the face database is incomplete, a face in the environment picture may not exist in the face database and therefore cannot be recognized. For this situation, the m values at the positions of the m nodes q1, q2, ..., qm in the descending matrix are obtained, and it is judged whether all m values are greater than a preset similarity threshold; the correspondence is established only when they are. Further, if not all of the m values are greater than the preset similarity threshold, the nodes whose values do not exceed the threshold are discarded, the face correspondence is established based on the remaining nodes, and the faces to be recognized corresponding to the discarded nodes are marked as having no recognition result. Recognition accuracy is thereby improved.
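The threshold check of steps S901-S902 can be sketched as below; the function name and its return convention (None standing for "no recognition result") are assumptions for illustration.

```python
def establish_correspondence(node_values, col_numbers, threshold):
    """Map each face index to its reference-picture column, discarding nodes
    whose similarity value does not exceed the preset threshold."""
    return {face: (col if value > threshold else None)  # None: not in database
            for face, (value, col) in enumerate(zip(node_values, col_numbers))}

result = establish_correspondence([0.75, 0.65, 0.25], [2, 3, 1], 0.5)
# face 0 -> column 2, face 1 -> column 3, face 2 -> no recognition result
```

Faces mapped to None would be reported as not found rather than forced onto a poorly matching reference picture.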
According to the path-based face recognition method, an environment picture is obtained; n reference pictures in a preset face database are called; face feature vector extraction is performed on the m faces to be recognized and the n reference pictures, respectively, so as to obtain m face feature vectors to be recognized and n reference face feature vectors; the similarity values of the face feature vectors to be recognized and the reference face feature vectors are calculated so as to obtain m×n similarity values; an m×n initial matrix is generated; the initial matrix is sorted in descending order to obtain a descending matrix; a descending index L is generated according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); a path P is generated in the descending matrix; a restoration path Q is generated in the descending matrix according to the formula L_inv = argsort(L, reverse=False); the column numbers V1, V2, ..., Vm corresponding to the m nodes q1, q2, ..., qm are obtained; and a face correspondence is established. Adaptation to the recognition scene is thereby improved, recognition accuracy is raised, and recognition efficiency is improved overall.
Referring to fig. 2, an embodiment of the present application provides a path-based face recognition apparatus, including:
the environment picture acquiring unit 10 is configured to acquire an environment picture, where the environment picture includes m faces to be recognized;
a reference picture retrieving unit 20, configured to retrieve n reference pictures in a preset face database, where each reference picture includes a face;
a face feature vector extraction unit 30, configured to perform face feature vector extraction processing on the m faces to be recognized and the n reference pictures, respectively, so as to obtain m face feature vectors to be recognized corresponding to the m faces to be recognized and n reference face feature vectors corresponding to the n reference pictures;
a similarity value calculation unit 40, configured to calculate, according to a preset similarity calculation method, a similarity value between the to-be-recognized face feature vector and the reference face feature vector, so as to obtain mxn similarity values;
an initial matrix generating unit 50, configured to generate an mxn initial matrix according to the mxn similarity values, where a horizontal row and a vertical column of each element in the mxn initial matrix correspond to a face to be recognized and a reference picture, respectively;
a descending matrix obtaining unit 60, configured to sort the initial matrix in descending order to obtain a descending matrix, and to generate a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True), where the descending order is based on the maximum value of the elements of each row, the argsort function returns the index values of the array values, max is the maximum function, A is the initial matrix, axis=1 indicates that the corresponding function is executed on each row of the matrix, axis=0 indicates that it is executed on each column of the matrix, and reverse=True indicates descending order;
a path P generating unit 70, configured to generate a path P in the descending matrix, where the path P is composed of m nodes P1, P2, ·, pm, the nodes P1, P2,. and pm are respectively located in a first row, a second row, and an mth row of the descending matrix, and the nodes P1, P2,. and pm are respectively located in different columns of the descending matrix, and a sum of values of the nodes P1, P2,. and pm is greater than a sum of values of nodes in other paths;
a restoration path Q generating unit 80, configured to generate, according to the formula L_inv = argsort(L, reverse=False), a restoration path Q in the descending matrix, where the restoration path Q comprises m nodes q1, q2, ..., qm, and argsort(L, reverse=False) indicates that the descending index L is sorted in ascending order;
a face correspondence establishing unit 90, configured to obtain the column numbers V1, V2, ..., Vm respectively corresponding to the m nodes q1, q2, ..., qm in the descending matrix, and to establish a face correspondence in which the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, ..., and the m-th face to be recognized corresponds to the reference picture represented by column number Vm.
The operations respectively executed by the above units correspond to the steps of the path-based face recognition method of the foregoing embodiment one to one, and are not described herein again.
In one embodiment, the environment picture acquiring unit includes:
the device comprises a video acquisition subunit, a video recognition subunit and a recognition processing subunit, wherein the video acquisition subunit is used for acquiring a preset video with specified duration collected by a camera, and the video comprises m objects to be recognized;
the frame picture extracting subunit is used for extracting t frame pictures from the video, wherein any one of the t frame pictures comprises m objects to be identified;
a face area obtaining subunit, configured to perform face segmentation and face area detection on the t frame pictures, respectively, so as to obtain m to-be-recognized face sets and t × m face areas, where each to-be-recognized face set includes t faces, and each to-be-recognized face set corresponds to an object to be recognized;
the largest-area face extraction subunit is used for extracting the largest-area faces from the m face sets to be recognized respectively so as to obtain m faces with the largest areas;
and the environment picture marking subunit is used for replacing m human faces of a specified frame picture in the t frame pictures with the m human faces with the largest area and marking the specified frame picture as an environment picture.
The operations respectively executed by the subunits correspond to the steps of the path-based face recognition method of the foregoing embodiment one by one, and are not described herein again.
In one embodiment, the similarity value calculation unit includes:
a similarity value calculating subunit, configured to calculate, according to the formula

A_FG = Σ(F_i · G_i) / ( √(Σ F_i²) · √(Σ G_i²) )

the similarity value A_FG between the face feature vector to be recognized and the reference face feature vector, where F is the face feature vector to be recognized, G is the reference face feature vector, F_i is the i-th component of the face feature vector F to be recognized, and G_i is the i-th component of the reference face feature vector G.
The operations respectively executed by the subunits correspond to the steps of the path-based face recognition method of the foregoing embodiment one by one, and are not described herein again.
In one embodiment, the descending matrix obtaining unit includes:
a normalization processing subunit, configured to normalize the initial matrix according to the formula

Ā_uj = H_uj · A_uj / (Σ_{k=1..n} A_uk)

to obtain a normalized matrix Ā, where Ā_uj is the element in the u-th row and j-th column of the normalized matrix Ā, H_uj is a preset normalization coefficient, and A_uj is the element in the u-th row and j-th column of the initial matrix;
and the descending matrix obtaining subunit is used for carrying out descending arrangement on the normalized matrix to obtain a descending matrix.
The operations respectively executed by the subunits correspond to the steps of the path-based face recognition method of the foregoing embodiment one by one, and are not described herein again.
In one embodiment, the face correspondence relationship establishing unit includes:
the m number value acquisition subunits are used for acquiring m number values of corresponding positions of the m nodes q1, q2, q.
A face correspondence establishing subunit, configured to establish a face correspondence if the m numerical values are all greater than a preset similarity threshold, where the face correspondence is that a first to-be-recognized face corresponds to a reference picture represented by the column number V1, and a second to-be-recognized face corresponds to a reference picture represented by the column number V2.
The operations respectively executed by the subunits correspond to the steps of the path-based face recognition method of the foregoing embodiment one by one, and are not described herein again.
The path-based face recognition apparatus obtains an environment picture; calls n reference pictures in a preset face database; performs face feature vector extraction on the m faces to be recognized and the n reference pictures, respectively, so as to obtain m face feature vectors to be recognized and n reference face feature vectors; calculates the similarity values of the face feature vectors to be recognized and the reference face feature vectors so as to obtain m×n similarity values; generates an m×n initial matrix; sorts the initial matrix in descending order to obtain a descending matrix; generates a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); generates a path P in the descending matrix; generates a restoration path Q in the descending matrix according to the formula L_inv = argsort(L, reverse=False); obtains the column numbers V1, V2, ..., Vm corresponding to the m nodes q1, q2, ..., qm; and establishes a face correspondence. Adaptation to the recognition scene is thereby improved, recognition accuracy is raised, and recognition efficiency is improved overall.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in the figure. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is used to provide computation and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the data used by the path-based face recognition method. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the path-based face recognition method.
The processor executes the above-mentioned method for recognizing a face based on a path, wherein the steps of the method are respectively in one-to-one correspondence with the steps of executing the method for recognizing a face based on a path according to the above-mentioned embodiment, and are not described herein again.
It will be understood by those skilled in the art that the structures shown in the drawings are only block diagrams of some of the structures associated with the embodiments of the present application and do not constitute a limitation on the computer apparatus to which the embodiments of the present application may be applied.
The computer device obtains an environment picture; calls n reference pictures in a preset face database; performs face feature vector extraction on the m faces to be recognized and the n reference pictures, respectively, so as to obtain m face feature vectors to be recognized and n reference face feature vectors; calculates the similarity values of the face feature vectors to be recognized and the reference face feature vectors so as to obtain m×n similarity values; generates an m×n initial matrix; sorts the initial matrix in descending order to obtain a descending matrix; generates a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); generates a path P in the descending matrix; generates a restoration path Q in the descending matrix according to the formula L_inv = argsort(L, reverse=False); obtains the column numbers V1, V2, ..., Vm corresponding to the m nodes q1, q2, ..., qm; and establishes a face correspondence. Adaptation to the recognition scene is thereby improved, recognition accuracy is raised, and recognition efficiency is improved overall.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored thereon, and when the computer program is executed by a processor, the method for path-based face recognition is implemented, where steps included in the method correspond to steps of the method for path-based face recognition in the foregoing embodiment one to one, and are not described herein again.
By executing the stored computer program, the computer-readable storage medium of the present application obtains an environment picture; calls n reference pictures in a preset face database; performs face feature vector extraction on the m faces to be recognized and the n reference pictures, respectively, so as to obtain m face feature vectors to be recognized and n reference face feature vectors; calculates the similarity values of the face feature vectors to be recognized and the reference face feature vectors so as to obtain m×n similarity values; generates an m×n initial matrix; sorts the initial matrix in descending order to obtain a descending matrix; generates a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True); generates a path P in the descending matrix; generates a restoration path Q in the descending matrix according to the formula L_inv = argsort(L, reverse=False); obtains the column numbers V1, V2, ..., Vm corresponding to the m nodes q1, q2, ..., qm; and establishes a face correspondence. Adaptation to the recognition scene is thereby improved, recognition accuracy is raised, and recognition efficiency is improved overall.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware, which program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium provided herein and used in the examples may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A face recognition method based on a path is characterized by comprising the following steps:
acquiring an environment picture, wherein the environment picture comprises m faces to be recognized;
calling n reference pictures in a preset face database, wherein each reference picture comprises a face;
respectively carrying out face feature vector extraction processing on the m faces to be recognized and the n reference pictures so as to obtain m face feature vectors to be recognized corresponding to the m faces to be recognized and n reference face feature vectors corresponding to the n reference pictures;
according to a preset similarity calculation method, calculating similarity values of the face feature vector to be recognized and the reference face feature vector so as to obtain m multiplied by n similarity values;
generating an mxn initial matrix according to the mxn similarity values, wherein horizontal rows and vertical columns of each element in the mxn initial matrix respectively correspond to a face to be recognized and a reference picture;
sorting the initial matrix in descending order to obtain a descending matrix, and generating a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True), wherein the descending order is based on the maximum value of the elements of each row, the argsort function returns the index values of the array values, max is the maximum function, A is the initial matrix, axis=1 indicates that the corresponding function is executed on each row of the matrix, axis=0 indicates that the corresponding function is executed on each column of the matrix, and reverse=True indicates descending order;
generating a path P in the descending matrix, wherein the path P is composed of m nodes p1, p2, ..., pm, the nodes p1, p2, ..., pm are respectively located in the first row, the second row, ..., and the m-th row of the descending matrix, the nodes p1, p2, ..., pm are respectively located in different columns of the descending matrix, and the sum of the values of the nodes p1, p2, ..., pm is greater than the sum of the node values of any other such path;
according to the formula L_inv = argsort(L, reverse=False), generating a restoration path Q in the descending matrix, wherein the restoration path Q includes m nodes q1, q2, ..., qm, and argsort(L, reverse=False) indicates that the descending index L is sorted in ascending order;
acquiring the column numbers V1, V2, ..., Vm respectively corresponding to the m nodes q1, q2, ..., qm in the descending matrix, and establishing a face correspondence, wherein the face correspondence is that the first face to be recognized corresponds to the reference picture represented by column number V1, the second face to be recognized corresponds to the reference picture represented by column number V2, ..., and the m-th face to be recognized corresponds to the reference picture represented by column number Vm.
2. The method according to claim 1, wherein the step of obtaining an environmental picture including m faces to be recognized comprises:
acquiring a preset video with specified duration collected by a camera, wherein the video comprises m objects to be identified;
extracting t frame pictures from the video, wherein any one of the frame pictures comprises m objects to be identified;
respectively carrying out face cutting and face area detection on the t frame pictures so as to obtain m face sets to be recognized and t multiplied by m face areas, wherein each face set to be recognized comprises t faces, and each face set to be recognized corresponds to an object to be recognized;
extracting the faces with the largest area from the m face sets to be recognized respectively so as to obtain m faces with the largest area;
and replacing m human faces of a specified frame picture in the t frame pictures with the m human faces with the largest area, and recording the specified frame picture as an environment picture.
3. The method for path-based face recognition according to claim 1, wherein the step of calculating the similarity value between the face feature vector to be recognized and the reference face feature vector according to a preset similarity calculation method comprises:
according to the formula:

A_FG = Σ(F_i · G_i) / ( √(Σ F_i²) · √(Σ G_i²) )

calculating the similarity value A_FG between the face feature vector to be recognized and the reference face feature vector, wherein F is the face feature vector to be recognized, G is the reference face feature vector, F_i is the i-th component of the face feature vector F to be recognized, and G_i is the i-th component of the reference face feature vector G.
4. The method for path-based face recognition according to claim 1, wherein the step of obtaining a descending matrix by descending the initial matrix comprises:
according to the formula:
[normalization formula FDA0002474004790000022 — rendered as an image in the original and not reproduced here]
normalizing the initial matrix to obtain a normalized matrix, wherein the element in the u-th row and the j-th column of the normalized matrix is computed from A_uj, the element in the u-th row and the j-th column of the initial matrix, and H, a preset normalization coefficient;
and performing descending order arrangement on the normalized matrix to obtain a descending order matrix.
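The normalization formula of this claim is likewise an image in this text. A minimal sketch, assuming element-wise division of each A_uj by the preset coefficient H (one plausible reading of the claim, not confirmed by the image):

```python
def normalize(A, H):
    """Normalize the initial matrix A with the preset normalization
    coefficient H.

    Element-wise division is assumed here; the claim's actual formula
    is rendered as an image and is not reproduced in this text."""
    return [[a / H for a in row] for row in A]
```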
5. The method for path-based face recognition according to claim 1, wherein the step of establishing the face corresponding relationship, the face corresponding relationship being that the first face to be recognized corresponds to the reference picture represented by the column number V1, the second face to be recognized corresponds to the reference picture represented by the column number V2, …, and the mth face to be recognized corresponds to the reference picture represented by the column number Vm, comprises:
acquiring m numerical values at the positions corresponding to the m nodes q1, q2, …, and qm;
if the m numerical values are all larger than a preset similarity threshold, establishing the face corresponding relationship, wherein the face corresponding relationship is that the first face to be recognized corresponds to the reference picture represented by the column number V1, the second face to be recognized corresponds to the reference picture represented by the column number V2, …, and the mth face to be recognized corresponds to the reference picture represented by the column number Vm.
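The threshold check of this claim can be sketched as follows (an illustrative Python sketch; the dictionary mapping face index to reference-picture column is an assumed representation of the face corresponding relationship):

```python
def face_correspondence(S, q_cols, threshold):
    """Establish the face corresponding relationship only if all m node
    values exceed the preset similarity threshold.

    S is a similarity matrix whose row u holds the u-th face to be
    recognized, and q_cols[u] is the column number of node q(u+1);
    returns {face index: reference-picture column} or None."""
    values = [S[u][q_cols[u]] for u in range(len(q_cols))]
    if all(value > threshold for value in values):
        return {u: q_cols[u] for u in range(len(q_cols))}
    return None
```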
6. A path-based face recognition apparatus, comprising:
the environment picture acquiring unit is used for acquiring an environment picture, and the environment picture comprises m faces to be recognized;
the reference picture calling unit is used for calling n reference pictures in a preset face database, and each reference picture comprises a face;
a face feature vector extraction unit, configured to perform face feature vector extraction processing on the m faces to be recognized and the n reference pictures, respectively, so as to obtain m face feature vectors to be recognized corresponding to the m faces to be recognized and n reference face feature vectors corresponding to the n reference pictures;
a similarity value calculation unit, used for calculating the similarity values between the face feature vectors to be recognized and the reference face feature vectors according to a preset similarity calculation method, so as to obtain m × n similarity values;
an initial matrix generating unit, configured to generate an m × n initial matrix according to the m × n similarity values, wherein each row of the initial matrix corresponds to a face to be recognized and each column corresponds to a reference picture;
a descending matrix obtaining unit, used for performing descending arrangement on the initial matrix to obtain a descending matrix, and for generating a descending index L according to the formula L = argsort(max(A, axis=1), axis=0, reverse=True), wherein the descending arrangement is based on the maximum element of each row, the argsort function returns the index values that sort an array, max is the maximum function, A is the initial matrix, axis=1 indicates that the corresponding function is applied to each row of the matrix, axis=0 indicates that it is applied to each column of the matrix, and reverse=True indicates descending order;
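The descending arrangement and descending index of this unit can be sketched as follows (an illustrative Python sketch; the claim's argsort(…, reverse=True) is expressed with `sorted`, since NumPy's `argsort` has no `reverse` parameter):

```python
def descending_index(A):
    """Descending index L: row indices of the initial matrix A sorted
    so that rows with larger maximum elements come first, mirroring
    argsort(max(A, axis=1), reverse=True) from the claim."""
    row_max = [max(row) for row in A]  # max along axis=1 (per row)
    return sorted(range(len(A)), key=lambda u: row_max[u], reverse=True)

def descending_matrix(A):
    """Rearrange the rows of A according to the descending index."""
    return [A[u] for u in descending_index(A)]
```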
a path P generation unit, configured to generate a path P in the descending matrix, wherein the path P is composed of m nodes P1, P2, …, and Pm, the nodes P1, P2, …, and Pm are located in the first row, the second row, …, and the m-th row of the descending matrix, respectively, the nodes P1, P2, …, and Pm are located in mutually different columns of the descending matrix, and the sum of the values of the nodes P1, P2, …, and Pm is greater than the sum of the values of the nodes in any other such path;
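Choosing m nodes, one per row and in pairwise distinct columns, with maximal value sum is an assignment problem. A brute-force sketch over column permutations (illustrative only; the claim does not specify a search method, and large m would call for e.g. the Hungarian algorithm):

```python
from itertools import permutations

def best_path(B):
    """Path P in the descending matrix B: one node per row, all nodes
    in distinct columns, maximizing the sum of node values.

    Exhaustive search over column permutations; feasible only for
    small m, and used here purely for illustration."""
    m, n = len(B), len(B[0])
    best_sum, best_cols = float("-inf"), None
    for cols in permutations(range(n), m):
        total = sum(B[row][cols[row]] for row in range(m))
        if total > best_sum:
            best_sum, best_cols = total, cols
    return best_cols, best_sum
```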
a restoration path Q generating unit, configured to:
[restoration formula FDA0002474004790000041 — rendered as an image in the original and not reproduced here]
generate a restoration path Q in the descending matrix, wherein the restoration path Q comprises m nodes q1, q2, …, and qm, and argsort(L, reverse=False) denotes sorting the descending index L in ascending order;
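The restoration step — undoing the descending arrangement so that the u-th node again corresponds to the u-th face to be recognized — can be sketched via the ascending argsort of L, which yields the inverse permutation (an illustrative Python sketch; function and variable names are assumptions):

```python
def restore_path(path_cols, L):
    """Restoration path Q: map the columns chosen for the rows of the
    descending matrix (path_cols) back to the original row order of
    the initial matrix.

    argsort(L, reverse=False), i.e. the ascending argsort of the
    descending index L, is the inverse permutation of L."""
    inverse = sorted(range(len(L)), key=lambda p: L[p])
    return [path_cols[inverse[u]] for u in range(len(L))]
```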
a face corresponding relation establishing unit, configured to acquire column numbers V1, V2, …, and Vm of the m nodes q1, q2, …, and qm, and to establish a face corresponding relationship, wherein the face corresponding relationship is that the first face to be recognized corresponds to the reference picture represented by the column number V1, the second face to be recognized corresponds to the reference picture represented by the column number V2, …, and the mth face to be recognized corresponds to the reference picture represented by the column number Vm.
7. The path-based face recognition device according to claim 6, wherein the environment picture obtaining unit comprises:
a video acquisition subunit, used for acquiring a video of a preset specified duration captured by a camera, wherein the video comprises m objects to be recognized;
a frame picture extracting subunit, used for extracting t frame pictures from the video, wherein each of the frame pictures comprises the m objects to be recognized;
a face area obtaining subunit, configured to perform face segmentation and face area detection on the t frame pictures, respectively, so as to obtain m to-be-recognized face sets and t × m face areas, where each to-be-recognized face set includes t faces, and each to-be-recognized face set corresponds to an object to be recognized;
the largest-area face extraction subunit is used for extracting the largest-area faces from the m face sets to be recognized respectively so as to obtain m faces with the largest areas;
and the environment picture marking subunit is used for replacing m human faces of a specified frame picture in the t frame pictures with the m human faces with the largest area and marking the specified frame picture as an environment picture.
8. The path-based face recognition device according to claim 6, wherein the similarity value calculation unit includes:
a similarity value calculating subunit, configured to:
[similarity formula FDA0002474004790000051 — rendered as an image in the original and not reproduced here]
calculate the similarity value A_FG between the face feature vector to be recognized and the reference face feature vector, wherein F is the face feature vector to be recognized, G is the reference face feature vector, F_i is the ith component of the face feature vector F to be recognized, and G_i is the ith component of the reference face feature vector G.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN202010357554.2A 2020-04-29 2020-04-29 Path-based face recognition method and device, computer equipment and storage medium Active CN111723647B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010357554.2A CN111723647B (en) 2020-04-29 2020-04-29 Path-based face recognition method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111723647A CN111723647A (en) 2020-09-29
CN111723647B true CN111723647B (en) 2022-04-15

Family

ID=72564184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010357554.2A Active CN111723647B (en) 2020-04-29 2020-04-29 Path-based face recognition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111723647B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108499A (en) * 2018-02-07 2018-06-01 腾讯科技(深圳)有限公司 Face retrieval method, apparatus, storage medium and equipment
CN109214273A (en) * 2018-07-18 2019-01-15 平安科技(深圳)有限公司 Facial image comparison method, device, computer equipment and storage medium
CN109815845A (en) * 2018-12-29 2019-05-28 深圳前海达闼云端智能科技有限公司 Face recognition method and device and storage medium
CN110110593A (en) * 2019-03-27 2019-08-09 广州杰赛科技股份有限公司 Face Work attendance method, device, equipment and storage medium based on self study
CN110879984A (en) * 2019-11-18 2020-03-13 上海眼控科技股份有限公司 Face comparison method and device
CN110889433A (en) * 2019-10-29 2020-03-17 平安科技(深圳)有限公司 Face clustering method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face Recognition Based on Generalized Parallel Two-Dimensional Complex Discriminant Analysis; Liu Wanjun et al.; Journal of Image and Graphics; 2018-09-30; Vol. 23, No. 9; pp. 1359-1370 *

Also Published As

Publication number Publication date
CN111723647A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN111860147B (en) Pedestrian re-identification model optimization processing method and device and computer equipment
Varadharajan et al. Automatic attendance management system using face detection
CN110110601B (en) Video pedestrian re-recognition method and device based on multi-time space attention model
CN110399799B (en) Image recognition and neural network model training method, device and system
CN111340123A (en) Image score label prediction method based on deep convolutional neural network
CN110147710B (en) Method and device for processing human face features and storage medium
CN111783748A (en) Face recognition method and device, electronic equipment and storage medium
WO2021082562A1 (en) Spoofing detection method and apparatus, electronic device, storage medium and program product
CN111612024B (en) Feature extraction method, device, electronic equipment and computer readable storage medium
CN108921038A (en) A kind of classroom based on deep learning face recognition technology is quickly called the roll method of registering
CN111507138A (en) Image recognition method and device, computer equipment and storage medium
CN116110100A (en) Face recognition method, device, computer equipment and storage medium
CN110827432A (en) Class attendance checking method and system based on face recognition
CN112911385A (en) Method, device and equipment for extracting picture to be identified and storage medium
CN111582027A (en) Identity authentication method and device, computer equipment and storage medium
CN111723647B (en) Path-based face recognition method and device, computer equipment and storage medium
CN116304179B (en) Data processing system for acquiring target video
CN108388869B (en) Handwritten data classification method and system based on multiple manifold
CN111708906B (en) Visiting retrieval method, device and equipment based on face recognition and storage medium
CN112001285A (en) Method, device, terminal and medium for processing beautifying image
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN106971157B (en) Identity coupling identification method based on multiple linear regression association memory model
CN111444957A (en) Image data processing method, image data processing device, computer equipment and storage medium
CN111079585A (en) Image enhancement and pseudo-twin convolution neural network combined pedestrian re-identification method based on deep learning
CN114973368A (en) Face recognition method, device, equipment and storage medium based on feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant