CN115273168A - Face recognition method, face recognition device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN115273168A
Authority
CN
China
Prior art keywords
face, characteristic value, class, recognized, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110488352.6A
Other languages
Chinese (zh)
Inventor
杨太任
成云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiments of this application disclose a face recognition method, a face recognition device, a storage medium, and an electronic device. The method is applied to an edge gateway and includes: receiving a first video file sent by a camera, extracting the face feature value of a face to be recognized in the first video file, and performing first recognition processing on the face to be recognized based on that face feature value and pre-stored class center feature values to obtain a first recognition result, where each class center feature value is the center of all face feature values of one specified face. The edge gateway recognizes the face simply by comparing the pre-stored class center feature values with the face feature value of the face to be recognized. Compared with the prior art, in which a server loads every face feature value, the amount of loaded data is small, so face recognition efficiency can be improved while face recognition is still achieved.

Description

Face recognition method, face recognition device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a face recognition method and apparatus, a storage medium, and an electronic device.
Background
With the development of Internet technology, more and more intelligent devices have entered daily life. For example, a smart camera can be used for indoor real-time monitoring, for access-control verification at entry channels, or for identity recognition in stations and other scenarios. In these scenarios the smart camera is mainly used for face recognition. To perform face recognition, a server typically performs a full scan of a relational database, loads every face feature value in the database into computer memory or GPU video memory, and then compares face feature values to implement real-time face recognition. Because the server scans the database in full and loads all face feature values, loading is slow and face recognition efficiency is reduced.
Disclosure of Invention
The embodiment of the application provides a face recognition method, a face recognition device, a computer storage medium and electronic equipment, and aims to solve the technical problem of how to improve the face recognition efficiency in an application scene of an intelligent camera. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a face recognition method, which is applied to an edge gateway, and the method includes:
receiving a first video file sent by a camera, and extracting a face characteristic value of a face to be recognized in the first video file;
and performing first recognition processing on the face to be recognized based on the face characteristic values and various pre-stored class central characteristic values to obtain a first recognition result, wherein the class central characteristic values are the central values of all face characteristic values of each specified face.
In a second aspect, an embodiment of the present application provides a face recognition method, which is applied to a server, and the method includes:
receiving a face characteristic value of a face to be recognized and a face type identification of the face to be recognized, which are sent by an edge gateway;
and if the face type identification indicates that the face to be recognized is a strange user, performing second recognition processing on the face characteristic value and each type of central characteristic value loaded from the database, and updating each type of central characteristic value based on a second recognition result, wherein the type of central characteristic value is the central value of all face characteristic values of each designated face.
In a third aspect, an embodiment of the present application provides a face recognition apparatus, which is applied to an edge gateway, where the apparatus includes:
the video receiving module is used for receiving a first video file sent by a camera and extracting a face characteristic value of a face to be recognized in the first video file;
and the face recognition module is used for carrying out first recognition processing on the face to be recognized based on the face characteristic value and each class central characteristic value which is stored in advance to obtain a first recognition result, wherein the class central characteristic value is the central value of all face characteristic values of each appointed face.
In a fourth aspect, an embodiment of the present application provides a face recognition apparatus, which is applied to a server, and the apparatus includes:
the first receiving module is used for receiving the face characteristic value of the face to be recognized and the face type identification of the face to be recognized, which are sent by the edge gateway;
and the second recognition module is used for performing second recognition processing on the face characteristic value and each class central characteristic value loaded from the database if the face class identification indicates that the face to be recognized is a strange user, and updating each class central characteristic value based on a second recognition result, wherein the class central characteristic value is the central value of all face characteristic values of each designated face.
In a fifth aspect, embodiments of the present application provide a computer storage medium having a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a sixth aspect, an embodiment of the present application provides an electronic device, which may include: a memory and a processor; wherein the memory stores a computer program adapted to be loaded from the memory and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
when the scheme of the embodiment of the application is executed, the edge gateway receives a first video file sent by a camera, extracts the face characteristic value of a face to be recognized in the first video file, and performs first recognition processing on the face to be recognized based on the face characteristic value and various types of central characteristic values stored in advance to obtain a first recognition result, wherein the type of central characteristic values are central values of all face characteristic values of each designated face. The face to be recognized is recognized through the edge gateway, and the edge gateway can recognize the face only by comparing the pre-stored center characteristic values of various classes with the face characteristic value of the face to be recognized. Compared with the prior art that the server loads all face characteristic values, the loaded data size is small, and the face recognition efficiency can be improved on the basis of realizing face recognition.
Drawings
To illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system architecture of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another face recognition method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a system architecture of another face recognition method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another face recognition method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another face recognition method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a physical structure of a database provided in an embodiment of the present application;
fig. 8 is a schematic display interface diagram of a terminal according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an edge gateway provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the embodiments of the present application more obvious and understandable, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present application, it is to be noted that, unless otherwise explicitly specified and limited, the words "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Fig. 1 is a schematic view of the system architecture of a face recognition method according to an embodiment of the present application. As shown in fig. 1, the architecture includes a camera 101, an edge gateway 102, and a server 103.
It should be understood that the server 103 may be an electronic device with a face recognition function, or a standalone server device such as a rack, blade, tower, or cabinet server, or a hardware device with strong computing power such as a workstation or mainframe computer. It may also be a server cluster composed of multiple servers arranged symmetrically, where each server has an equivalent function and an equivalent status in the service link and can provide services independently, that is, without the assistance of the other servers.
Similarly, the edge gateway 102 may be an electronic device with a face recognition function, or a hardware device with strong computing power such as a workstation or mainframe computer. It may also be an edge gateway cluster composed of multiple edge gateways arranged symmetrically, where each edge gateway has an equivalent function and an equivalent status in the service link and can provide services independently, without the assistance of another edge gateway.
The camera 101 and the edge gateway 102, the edge gateway 102 and the server 103 are in interactive communication via a network, which may be a wireless network including but not limited to a cellular network, a wireless local area network, an infrared network or a bluetooth network, or a wired network including but not limited to an ethernet, a Universal Serial Bus (USB) or a controller area network.
In some embodiments, data (e.g., target compressed packets) exchanged over the network is represented using techniques and/or formats including HyperText Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), or Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may be used in place of, or in addition to, those described above.
According to some embodiments, the camera collects video and sends it to the edge gateway, which attempts to identify the faces to be recognized in the video. When the edge gateway fails to recognize a face, it sends that face's feature value to the server. The server loads the class center feature values of the various faces stored in the database, where each class center feature value is the center of the face feature values of one class of face, and performs clustering based on the feature value of the unrecognized face and those class center feature values.
The face recognition method provided by the system architecture is generally executed by the edge gateway 102, and accordingly, the face recognition apparatus is generally disposed in the edge gateway 102, but the present application is not limited thereto.
The present application will be described in detail with reference to specific examples.
Fig. 2 is a schematic flow chart of a face recognition method according to an embodiment of the present application. As shown in fig. 2, the method of the embodiment of the present application may rely on the system architecture shown in fig. 1, and may include the following steps:
s201, receiving a first video file sent by a camera, and extracting a face characteristic value of a face to be recognized in the first video file.
In this embodiment, each step is executed by an edge gateway. An edge gateway is one of the three hardware forms of edge computing. The rapid development of the Internet of Things has pushed users into the cloud era, but cloud applications generate large amounts of data, and Internet of Things applications may require extremely fast response times, data privacy, and data validity. Transmitting all generated data to a cloud computing center for processing increases network load, delays data processing, and wastes resources on large amounts of useless data. With the growth of the Internet of Things and cloud services, generating, processing, and analyzing data at the edge of the network before it is transmitted to the cloud computing center is what is meant by edge computing.
The first video file may be the video file corresponding to video or pictures captured by the camera in its detection area, and may contain video or pictures of one user or of multiple users. The faces to be recognized are the faces of each user to be recognized in the first video file. Each face to be recognized corresponds to one or more face photos, which can be extracted from the video or pictures collected by the camera. For brevity, this embodiment discusses only the case where the first video file contains a single face photo of a user to be recognized at a given moment; the case of multiple face photos at the same moment is handled in the same way and is not described again here.
A face feature value is a set of information composed of face features and can be understood as a face feature vector. Each face can be marked with a number of feature points, for example 21, 29, 68, or 81 feature points, covering the five sense organs and the facial contour; a feature-vector extractor derives a feature vector for each face from these points. Every face has a feature vector in the algorithm, and this vector is the basis of face comparison. Feature vectors extracted from different photos of the same person lie very close together in the feature space, while photos of different people lie far apart. If the distance between two face feature vectors is smaller than a specific value, the corresponding face photos are considered to show the same person's face; if the distance exceeds that value, they do not.
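The threshold comparison just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 0.6 threshold and the 2-dimensional vectors are assumptions (real face embeddings are typically 128- or 512-dimensional, and thresholds are tuned per model).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two face feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative threshold (an assumption, not specified by the patent).
SIMILARITY_THRESHOLD = 0.6

def same_person(a: np.ndarray, b: np.ndarray) -> bool:
    # Two photos are judged to show the same face when the similarity
    # between their feature vectors reaches the threshold.
    return cosine_similarity(a, b) >= SIMILARITY_THRESHOLD
```

For example, two nearly parallel vectors are judged to be the same person, while orthogonal vectors are not.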
Specifically, the edge gateway receives a first video file from the camera, the first video file is a video file corresponding to a video or a picture acquired by the camera in the detection area, the edge gateway extracts a face photo of the same user to be recognized in the first video file, namely, a face photo of the same face to be recognized, and then the feature extractor is used for extracting face feature values of all face photos of the same face to be recognized.
S202, performing first recognition processing on the face to be recognized based on the face feature value and each class center feature value stored in advance to obtain a first recognition result, wherein the class center feature value is the center value of all face feature values of each specified face.
Each pre-stored class center feature value is a class center feature value of each designated face, and each designated face can be understood as a face of a user with known identity information, that is, the edge gateway can pre-store at least one class center feature value corresponding to the face of the user with known identity information. The class center feature value can be understood as the center value of the face feature values corresponding to all face photos of each designated face. Because each face feature value represents a face feature vector, the center value of all face feature values is the center vector of all face feature vectors in the feature space, the center vector of all face feature values is a class center feature value, and the class center feature value of each designated face can be used for representing the face feature value specific to each designated face.
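A minimal sketch of computing a class center feature value as the center of one designated face's feature vectors. The re-normalization step is an assumption added so the center can be compared by cosine similarity; the patent itself only says "center value".

```python
import numpy as np

def class_center(feature_values: np.ndarray) -> np.ndarray:
    # feature_values: shape (n_photos, dim), one face feature vector
    # per registered photo of a single designated face.
    center = feature_values.mean(axis=0)
    # Re-normalize so the center is directly comparable by cosine
    # similarity (an illustrative assumption).
    return center / np.linalg.norm(center)
```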
Specifically, the edge gateway performs a first recognition process on the face to be recognized to obtain a first recognition result, which may be understood as comparing a face feature value of the face to be recognized with a face feature value specific to each pre-stored designated face. That is to say, the face characteristic value of the face to be recognized is compared with the various class center characteristic values stored in advance, so that the recognition process of the face to be recognized by the edge gateway can be realized. It should be noted that, if the face to be recognized corresponds to one face photograph, the face to be recognized corresponds to one face feature value, and the face feature value may be directly compared with each class of central feature values to obtain a recognition result. If the face to be recognized corresponds to a plurality of face photos, the face to be recognized corresponds to a plurality of face characteristic values, at this time, the central values of the plurality of face characteristic values can be calculated, the class central characteristic value of the face to be recognized is calculated according to the above, and then the class central characteristic value of the face to be recognized is compared with each class central characteristic value to obtain a recognition result.
Specifically, no matter the face feature value of the face to be recognized is compared with each class center feature value, or the class center feature value of the face to be recognized is compared with each class center feature value, the comparison process can be realized by adopting the similarity between the feature values in the embodiment of the application. Namely, the similarity between the face characteristic value of the face to be recognized and each class center characteristic value is calculated, or the similarity between the class center characteristic value of the face to be recognized and each class center characteristic value is calculated. If the similarity is greater than the similarity threshold value, determining that the first recognition result is that the edge gateway recognizes the face to be recognized; if the similarity larger than the similarity threshold does not exist in the similarity, determining that the first recognition result is that the edge gateway does not recognize the face to be recognized.
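The first recognition processing described above can be sketched as follows. The dictionary of class centers, the function name, and the 0.6 threshold are illustrative assumptions, not part of the patent.

```python
import numpy as np

def first_recognition(face_feature, class_centers, threshold=0.6):
    # class_centers: dict mapping target class identifier -> class
    # center feature vector. Returns the matched identifier, or None
    # when no similarity exceeds the threshold (the stranger case).
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_id, best_sim = None, threshold
    for class_id, center in class_centers.items():
        sim = cos(face_feature, center)
        if sim > best_sim:
            best_id, best_sim = class_id, sim
    return best_id
```

Only the small set of class centers is scanned, which is the source of the efficiency gain over loading every face feature value.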
In this embodiment, the similarity between feature values may be computed as the cosine distance between them. In general, either Euclidean distance or cosine distance can measure the distance between two vectors. Euclidean distance is the most common metric and computes the absolute distance between points in a multidimensional space. Cosine distance computes the cosine of the angle between two vectors in a vector space and measures the relative difference between two individuals: it focuses on the difference in direction of the two vectors rather than on their distance or length. For the face recognition method of this embodiment, therefore, the cosine distance is more appropriate than the Euclidean distance as the similarity describing the relationship between two feature vectors.
When the scheme of this embodiment is executed, the edge gateway receives a first video file sent by a camera, extracts the face feature value of a face to be recognized in the first video file, and performs first recognition processing on the face based on that feature value and the pre-stored class center feature values to obtain a first recognition result, where each class center feature value is the center of all face feature values of one designated face. The edge gateway recognizes the face simply by comparing the pre-stored class center feature values with the face feature value of the face to be recognized. Compared with the prior art, in which the server loads all face feature values, the amount of loaded data is small, so face recognition efficiency can be improved while face recognition is still achieved.
Please refer to fig. 3, which is a schematic flow chart of a face recognition method according to an embodiment of the present application. As shown in fig. 3, the method according to the embodiment of the present application may rely on the system architecture shown in fig. 1, and may include the following steps, which are described only by taking an execution subject of each step as an edge gateway:
s301, receiving a target face characteristic value and a target class identification of at least one designated face sent by a server, and respectively calculating class center characteristic values corresponding to the target class identifications based on the target face characteristic value.
The designated face represents the face of a user whose identity information is known; the target face feature value is the face feature value of a face photo corresponding to that designated face; and the target class identifier identifies the designated face and is a unique string of digits.
Specifically, the edge gateway receives a target face feature value and a target class identifier of at least one designated face sent by the server, and the edge gateway may calculate a class center feature value of each designated face according to the target face feature value of each designated face, that is, calculate a center vector of all target face feature values of each designated face in a feature space, and use the center vector as the class center feature value. The target class identification is used for identifying each designated face, each designated face has a class center characteristic value, and the target class identification and the class center characteristic value of the designated face are in one-to-one correspondence. The edge gateway may then locally store the target class identification and class center feature value for the at least one specified face.
It is understood that the server may send the target face feature value and target class identifier of at least one designated face to the edge gateway periodically or in real time. The server may also send, according to the area where each edge gateway is located, the target face feature values and target class identifiers of the designated faces corresponding to that area. For example, suppose province A includes cities a, b, c, d, and e; the server sends the target face feature values and target class identifiers of the users with known identity information in those five cities to the edge gateway of the area where each user is located. The server may also distribute the designated faces among edge gateways according to the target class identifier. For example, if the target class identifier is a string of six digits, the designated faces whose identifiers match 0xxxxx are sent to edge gateway 1, those matching 1xxxxx to edge gateway 2, those matching 2xxxxx to edge gateway 3, and so on, so that each edge gateway receives the designated faces whose target class identifiers share a given leading digit.
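The identifier-based distribution in the example above can be sketched as a simple routing function. The function name and the modulo fallback for more than ten gateways are illustrative assumptions.

```python
def gateway_for(class_id: str, num_gateways: int = 10) -> int:
    # Route a designated face by the leading digit of its six-digit
    # target class identifier: 0xxxxx -> gateway 1, 1xxxxx -> gateway 2,
    # and so on (wrapping with modulo if there are fewer gateways).
    assert len(class_id) == 6 and class_id.isdigit()
    return int(class_id[0]) % num_gateways + 1
```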
S302, receiving a first video file sent by a camera, and extracting a face characteristic value of a face to be recognized in the first video file.
And S303, performing first identification processing on the face to be identified based on the face characteristic value and each class central characteristic value which is stored in advance to obtain a first identification result, wherein the class central characteristic value is the central value of all face characteristic values of each specified face.
Specifically, S302 and S303 can refer to S201 and S202 in fig. 2, and are not described herein again.
S304, if the first recognition result indicates that a first target class central characteristic value matched with the face characteristic value does not exist in the class central characteristic values, generating a strange class identification, and determining the face class identification of the face to be recognized as the strange class identification.
Specifically, following S303, if the first recognition result indicates that the edge gateway did not recognize the face to be recognized, then no first target class center feature value whose similarity with the face feature value exceeds the similarity threshold exists among the class center feature values pre-stored by the edge gateway. Because the edge gateway has identified the face to be recognized as a stranger's face, it generates a stranger class identifier for it. Like the target class identifier, the stranger class identifier is a unique string of digits identifying the current face to be recognized, and the edge gateway uses it as the face class identifier of the face to be recognized.
S305, the face characteristic value and the strange type identification are sent to a server, so that the server conducts second recognition processing on the face to be recognized based on the face characteristic value and each type center characteristic value loaded from a database.
Specifically, after determining that the face class identifier of the face to be recognized is a stranger class identifier, the edge gateway may send the face feature value of the face to be recognized, together with the stranger class identifier, to the server. The stranger class identifier informs the server that the face was not recognized by the edge gateway. To perform the second recognition processing, the server uses the class center feature values it loaded from the database when it was powered on and calculates the similarity between each class center feature value and the face feature value of the face to be recognized; the specific similarity calculation and determination methods may refer to S102 in fig. 2 and are not described herein again.
S306, if the first recognition result indicates that a second target class center feature value matching the face feature value exists among the class center feature values, determining the face class identifier of the face to be recognized to be the target class identifier corresponding to the second target class center feature value.
Specifically, according to S303, if the first recognition result indicates that the edge gateway recognized the face to be recognized, then among the class center feature values pre-stored by the edge gateway there exists a second target class center feature value whose similarity with the face feature value of the face to be recognized is greater than the similarity threshold. The edge gateway has thus recognized the face as that of a user whose identity information is known, so it determines the target class identifier corresponding to the second target class center feature value and uses that target class identifier as the face class identifier of the face to be recognized.
S307, sending the face feature value and the target class identifier to a server, so that the server updates each class center feature value loaded in a database based on the face feature value.
Specifically, after determining that the face class identifier of the face to be recognized is the target class identifier, the edge gateway may send the face feature value of the face to be recognized and the target class identifier to the server, so that the server updates the class center feature values loaded from the database based on this face feature value. That the edge gateway recognized the face indicates that the server's database already stores a class center feature value corresponding to it, which the server can therefore update according to the face feature value of the face to be recognized.
When the scheme of this embodiment is executed, the edge gateway first receives the target face feature values and target class identifiers of at least one designated face from the server and calculates the class center feature value corresponding to each target class identifier from the target face feature values. It then receives a first video file from the camera, extracts the face feature value of the face to be recognized in the first video file, and performs first recognition processing on that face based on the face feature value and the pre-stored class center feature values to obtain a first recognition result. The edge gateway thus realizes face recognition by comparing the pre-stored class center feature values with the face feature value of the face to be recognized. Unlike the prior art, in which the server loads the full set of face feature values for comparison, this shortens the transmission path of the video on the one hand and improves face recognition efficiency on the other, while still realizing face recognition. If the first recognition result indicates that no first target class center feature value matching the face feature value exists among the class center feature values, the edge gateway generates a stranger class identifier, determines the face class identifier of the face to be recognized to be that stranger class identifier, and sends the face feature value and the stranger class identifier to the server, so that the server performs second recognition processing on the face based on the face feature value and the class center feature values loaded from the database.
If the first recognition result indicates that a second target class center feature value matching the face feature value exists among the class center feature values, the edge gateway determines the face class identifier of the face to be recognized to be the target class identifier corresponding to that value and sends the face feature value and the target class identifier to the server, so that the server updates the class center feature values loaded from the database based on the face feature value. After the edge gateway performs the first recognition, it sends the result to the server; the server both updates the class center feature values loaded from the database according to the face feature values of faces the edge gateway recognized and performs second recognition on faces the edge gateway did not recognize, which improves the accuracy of face recognition.
Fig. 4 is a schematic diagram of a system architecture of a face recognition method according to an embodiment of the present disclosure. As shown in fig. 4, it includes a camera 401, an edge gateway 402, a server 403, and a terminal 404.
It should be understood that the server 403 may be an electronic device with a face recognition function, such as a standalone server device (rack-mounted, blade, tower or cabinet server equipment) or other hardware with substantial computing power such as a workstation or mainframe; it may also be a server cluster composed of multiple servers. The servers in such a cluster may be arranged symmetrically, each having an equivalent function and position in the service link and each able to provide services externally on its own, that is, without assistance from the other servers.
The terminal 404 may be an electronic device with communication functions, including but not limited to: wearable devices, handheld devices, personal computers, tablet computers, in-vehicle devices, smart phones, and computing or other processing devices connected to a wireless modem. Terminal devices in different networks may go by different names, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), terminal equipment in a 5G network or future evolution network, and the like.
The camera 401 and the edge gateway 402, the edge gateway 402 and the server 403, and the server 403 and the terminal 404 communicate with one another via a network, which may be a wireless network (including but not limited to a cellular, wireless local area, infrared or Bluetooth network) or a wired network (including but not limited to Ethernet, Universal Serial Bus (USB) or a controller area network).
In some embodiments, data exchanged over the network (e.g., target compressed packets) is represented using techniques and/or formats including HyperText Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN) or Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may be used in place of, or in addition to, the techniques described above.
According to some embodiments, the camera collects a video and sends it to the edge gateway, which identifies the faces to be recognized in the video. When the edge gateway does not recognize a face, it sends that face's feature value to the server; the server loads the class center feature values of the various faces stored in the database, each class center feature value being the center value of the face feature values of one class of faces, and performs clustering based on the face feature value of the face to be recognized and those class center feature values. In addition, the server sends the face recognition result to the terminal so that the terminal can display it.
The face recognition method provided by the present system architecture is generally executed by the server 403, and accordingly, the face recognition apparatus is generally disposed in the server 403, but the present application is not limited thereto.
Please refer to fig. 5, which is a flowchart illustrating a face recognition method according to an embodiment of the present disclosure. As shown in fig. 5, the method may rely on the system architecture shown in fig. 4 and may include the following steps, described with the server as the execution subject of each step:
S501, receiving the face feature value of a face to be recognized and the face class identifier of the face to be recognized, which are sent by an edge gateway.
Specifically, after the edge gateway performs the first recognition on the face to be recognized, it sends the face feature value and face class identifier of that face to the server, which receives them. It can be understood that if the edge gateway recognized the face, the face class identifier is a target class identifier, that is, an identifier of a user with known identity information stored by the server in the database; if the edge gateway did not recognize the face, the face class identifier is a stranger class identifier, generated by the edge gateway when recognition failed.
S502, if the face class identifier indicates that the face to be recognized belongs to a strange user, performing second recognition processing on the face feature value against each class center feature value loaded from a database, and updating the class center feature values based on the second recognition result, where a class center feature value is the center value of all face feature values of a designated face.
Specifically, if the face class identifier is a stranger class identifier, the edge gateway did not recognize the face to be recognized, which is currently treated as belonging to a strange user. The server therefore performs secondary recognition, which can be understood as a feature comparison: the server calculates the similarity between each class center feature value loaded from the database and the face feature value of the face to be recognized (the calculation method may refer to S101 in fig. 1 and is not repeated here). If some class center feature value has a similarity greater than the similarity threshold, the server has recognized the face, meaning a class center feature value corresponding to it exists among those loaded from the database. The server then updates that class center feature value: it obtains from the database all face feature values belonging to that class, calculates the center vector of those existing feature values together with the face feature value of the face to be recognized, and takes that center vector as the updated class center feature value in the database.
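The secondary recognition and center-vector update just described can be sketched as follows. This is a minimal illustration: the function names are invented, and the use of cosine similarity with a 0.9 threshold is an assumption drawn from the example given later in this application.

```python
import numpy as np

def second_recognition(face_feature, class_centers, class_features, threshold=0.9):
    """Server-side secondary recognition sketch: compare a face feature
    vector against each class center loaded from the database; on a match,
    recompute that class center as the center (mean) of the stored feature
    vectors for the class together with the new one.

    class_centers:  dict, class identifier -> center vector
    class_features: dict, class identifier -> list of stored feature vectors
    Returns the matched class identifier, or None for a confirmed stranger.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_id, best_sim = None, threshold
    for class_id, center in class_centers.items():
        sim = cosine(face_feature, center)
        if sim > best_sim:
            best_id, best_sim = class_id, sim

    if best_id is not None:
        # Recognized: update the class center with the new feature value.
        feats = class_features[best_id] + [face_feature]
        class_centers[best_id] = np.mean(feats, axis=0)
    return best_id
```

A `None` return corresponds to the case where no similarity exceeds the threshold and the face is confirmed as a stranger's.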
It should be noted that an edge gateway may fail to recognize a face that the server does recognize because there may be multiple edge gateways, and the server may send the face feature values and target class identifiers of different designated faces to different edge gateways for storage. Therefore, when an edge gateway does not recognize a face, the server still needs to perform secondary recognition processing to determine the user identity of the face, which improves the accuracy of face recognition.
When the method of this embodiment is executed, the server receives the face feature value and face class identifier of a face to be recognized sent by an edge gateway. If the face class identifier indicates a strange user, the server performs second recognition processing on the face feature value against the class center feature values loaded from the database and updates them based on the second recognition result, a class center feature value being the center value of all face feature values of a designated face. In this way the server receives the result of the edge gateway's first recognition and, if that result indicates the face was not recognized, performs secondary recognition to determine the user identity of the face. Whether in the edge gateway's first recognition or the server's second, recognition is realized by comparing class center feature values with face feature values, unlike the prior art in which the server loads the full set of face feature values for feature comparison. On the basis of realizing face recognition, this therefore improves both the efficiency and the accuracy of face recognition.
Fig. 6 is a schematic flow chart of a face recognition method according to an embodiment of the present application. As shown in fig. 6, the method according to the embodiment of the present application may rely on the system architecture shown in fig. 4, and may include the following steps, which are described only by taking an execution subject of each step as a server:
S601, obtaining at least one face photo and/or second video file uploaded by a terminal, respectively extracting the target face feature value of the designated face in each face photo and/or second video file, and obtaining the target class identifier of each designated face.
Specifically, a user selects at the terminal at least one face photo or video file of at least one designated face and uploads it to the server. After receiving it, the server extracts the target face feature value of each face photo through a feature extractor, or first extracts face photos from the video file and then extracts their target face feature values. When uploading, the terminal also generates a target class identifier for each designated face, used to distinguish the different designated faces; when the server has extracted all target face feature values of each designated face, it can also obtain each designated face's target class identifier. For example, in a home access control verification scenario, the designated faces may be those of users with designated identities such as dad, mom, sister, brother, friend A and friend B, for whom the terminal generates the class identifiers 000012, 000013, 000014, 000015, 000016 and 000017 respectively; the target class identifier of each designated face is unique.
S602, sending each target face feature value and target class identifier to an edge gateway, so that the edge gateway performs first recognition processing on a face to be recognized based on them.
Specifically, the server may send the target face feature values and target class identifier of each designated face to the edge gateway for use in the first recognition processing. The edge gateway first calculates a class center feature value for each designated face from its target face feature values and stores each class center feature value locally together with its corresponding target class identifier. It then calculates the similarity between each class center feature value and the face feature value of the face to be recognized and compares that similarity with the similarity threshold. If some class center feature value has a similarity greater than the threshold, the edge gateway has recognized the face and takes the corresponding target class identifier as the face class identifier of the face to be recognized. If no class center feature value has a similarity greater than the threshold, the edge gateway has not recognized the face; it then generates a stranger class identifier for it and uses that as the face class identifier of the face to be recognized.
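The class-center computation the edge gateway performs on the received target face feature values can be sketched as follows; the names are illustrative, and the center is taken as the mean of each class's feature vectors, per the definition of the class center feature value above:

```python
import numpy as np

def compute_class_centers(target_features):
    """Edge-gateway sketch: compute one class center feature value per
    target class identifier as the mean of that designated face's
    target face feature vectors in the feature space.

    target_features: dict, class identifier -> list of feature vectors,
    as received from the server for each designated face.
    """
    return {class_id: np.mean(vectors, axis=0)
            for class_id, vectors in target_features.items()}
```

The resulting dictionary is what the gateway stores locally and compares against incoming face feature values.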
S603, calculating the target class center feature value corresponding to the target face feature values, and writing the target class identifier, the target face feature values and the target class center feature value into a database.
Specifically, using the target face feature values extracted in S601, the server writes the target face feature values, target class identifier and target class center feature value of each designated face into the database, storing them in the form of a columnar database, that is, a database whose storage architecture associates data by column. Referring to the physical structure diagram of the columnar database shown in fig. 7, there are three columns of data: the first column holds the class identifier of each designated face, the second the face feature values of each face photo under each designated face, and the third the class center feature value of each designated face. 000111, 000112, 000113, ..., 00011N represent the class identifiers of the different designated faces; (a11, b11), (a12, b12) and (a13, b13) represent the face feature values of face photos of the designated face identified by 000111; (m1, n1) represents the class center feature value of the designated face identified by 000111. The data stored in the database for class identifiers 000112 and 000113 follows the same pattern as for 000111 and is not described again here.
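The three-column layout of fig. 7 could be represented as a minimal column-oriented store such as the following sketch; the values and the helper function are made up for illustration and are not part of the application:

```python
# Illustrative column-oriented layout of the fig. 7 table: three parallel
# columns; entries at the same row index belong together. Values are
# invented for illustration.
columns = {
    "class_identifier":     ["000111", "000111", "000111"],
    "face_feature_value":   [(0.11, 0.21), (0.12, 0.22), (0.13, 0.23)],
    "class_center_feature": [(0.12, 0.22), (0.12, 0.22), (0.12, 0.22)],
}

def rows_for_class(cols, class_id):
    """Return the (face feature, class center) pairs stored for one class."""
    return [(feat, center)
            for cid, feat, center in zip(cols["class_identifier"],
                                         cols["face_feature_value"],
                                         cols["class_center_feature"])
            if cid == class_id]
```

Note how every row of one class repeats the same class center feature value in the third column, matching the description of fig. 7.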
S604, receiving the face feature value of the face to be recognized and the face class identifier of the face to be recognized, which are sent by the edge gateway.
S605, if the face class identifier is a stranger class identifier, determining that the face to be recognized belongs to a strange user, and sending first prompt information to the terminal, the first prompt information indicating that the face to be recognized is that of a strange user.
S604 and S605 are explained below.
Specifically, after the edge gateway performs the first recognition on the face to be recognized, it sends the face feature value and face class identifier of that face to the server, which receives them. If the face class identifier is a stranger class identifier, the edge gateway did not recognize the face; if it is a target class identifier, the edge gateway did recognize it. Upon determining that the face to be recognized belongs to a strange user, the server may send first prompt information to the terminal indicating as much. Referring to the terminal interface diagram shown in fig. 8, 810 indicates the first prompt information, whose content notifies the user that a stranger has been recognized arriving home.
S606, respectively calculating the similarity between the face feature value and each class center feature value.
Specifically, when the face to be recognized belongs to a strange user, the server may further perform secondary recognition, which can be understood as calculating the similarity between the face feature value of the face to be recognized and each class center feature value loaded from the database (these being the class center feature values loaded when the server started). Because the feature values are feature vectors, calculating the similarity amounts to comparing the face feature vector of the face to be recognized with each loaded class center feature vector in the feature space against a preset threshold. If some class center feature vector has a similarity with the face feature vector greater than the threshold, that class center feature vector is similar to the face feature vector; if the similarity between the face feature vector and every class center feature vector is less than or equal to the threshold, the face feature vector is not similar to any class center feature vector in the database. The similarity of two vectors in space can be evaluated with the Euclidean distance or the cosine distance. This embodiment may use the cosine for evaluation: the closer the cosine value is to 1, the more similar the two vectors.
For example, with the threshold set to 0.9: if the cosine similarity between the face feature vector and a class center feature vector is greater than 0.9, the two are similar; if it is less than or equal to 0.9, they are not.
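A sketch of the cosine evaluation in this example (helper names are illustrative): with the 0.9 threshold, two nearly parallel feature vectors match while orthogonal ones do not.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors; the closer the
    value is to 1, the more similar the vectors, as described above."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_similar(a, b, threshold=0.9):
    """Apply the 0.9 threshold from the example above."""
    return cosine_similarity(a, b) > threshold
```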
S607, if a first similarity greater than the similarity threshold exists among the similarities, updating the class center feature value corresponding to the first similarity based on the face feature value, and writing the stranger class identifier and the face feature value into the database.
Specifically, if a first similarity greater than the similarity threshold exists, the server determines the class identifier to which the corresponding class center feature value belongs, obtains all face feature values under that class identifier from the database, calculates the center value of those feature values together with the face feature value of the face to be recognized, and takes that center value as the new class center feature value of the class, that is, it replaces the class center feature value corresponding to the first similarity. In addition, the server may write a new row into the columnar database shown in fig. 7: the stranger class identifier generated by the edge gateway as the first-column class identifier, the face feature value extracted by the edge gateway as the second-column face feature value, and the new class center feature value as the third-column class center feature value. It can be understood that if the edge gateway extracted several face feature values of the face to be recognized, a corresponding number of rows is written, each with the stranger class identifier in the first column, a different face feature value in the second, and the new class center feature value in the third.
S608, if no second similarity greater than the similarity threshold exists among the similarities, writing the stranger class identifier and the face feature value into the database.
Specifically, if no second similarity greater than the similarity threshold exists, the face to be recognized is confirmed to be a stranger's. The server may then calculate a class center feature value for it and write a new row into the columnar database shown in fig. 7: the stranger class identifier generated by the edge gateway as the first-column class identifier, the face feature value extracted by the edge gateway as the second-column face feature value, and the calculated class center feature value as the third-column value. If the edge gateway extracted several face feature values of the face to be recognized, a corresponding number of rows is written, each with the stranger class identifier in the first column, a different face feature value in the second, and the class center feature value in the third. If there is only one face feature value of the face to be recognized, that value itself serves as the class center feature value; if there are several, their center value in the feature space is calculated and used as the class center feature value.
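The stranger's class center computation just described can be sketched as follows (names illustrative): a single feature value is its own center, while several feature values yield their mean in the feature space.

```python
import numpy as np

def stranger_class_center(face_features):
    """Class center for a newly enrolled stranger: with a single face
    feature value, the center is that value itself; with several, their
    center (mean) in the feature space."""
    feats = np.asarray(face_features, dtype=float)
    if len(feats) == 1:
        return feats[0]
    return feats.mean(axis=0)
```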
S609, if the face class identifier matches a target class identifier, sending second prompt information to the terminal, the second prompt information indicating that the face to be recognized is that of a familiar user, and updating the target class center feature value corresponding to the face class identifier in the database.
Specifically, if the face class identifier matches a target class identifier, the edge gateway recognized the face to be recognized. The server determines that the face belongs to a familiar user and may send second prompt information to the terminal indicating as much. Referring to the terminal interface diagram shown in fig. 8, 810 indicates the second prompt information, whose content notifies the user that the familiar user xx has been recognized arriving home.
When the method is executed, a server acquires at least one face photo and/or second video file uploaded by a terminal, extracts each face photo and/or a target face characteristic value of a designated face in the second video file respectively, acquires each target type identification of the designated face, and sends each target face characteristic value and the target type identification to an edge gateway, so that the edge gateway performs first recognition processing on the face to be recognized based on each target face characteristic value and the target type identification. The server also receives a first recognition result of the face to be recognized by the edge gateway, and when the face to be recognized is not recognized by the edge gateway, the server performs feature comparison processing on the basis of the central feature values of various classes loaded in the database and the face feature value of the face to be recognized, so that secondary recognition of the face to be recognized is realized. Whether the edge gateway identifies the face to be identified for the first time or the server identifies the face to be identified for the second time, the identification is realized by comparing the class center characteristic value and the face characteristic value, and the method is different from the prior art that the server loads the full amount of face characteristic values to perform the characteristic comparison processing. Therefore, on the basis of realizing face recognition, the face recognition efficiency can be improved, and the accuracy of face recognition can also be improved. In addition, the server also sends a corresponding prompt message to the terminal according to the first recognition result of the face to be recognized of the edge gateway so as to inform the user whether the face to be recognized is a familiar user or a strange user. 
The server also updates each class center characteristic value in the database based on the face characteristic value of the face to be recognized, thereby improving the data stored in the database.
Fig. 9 is a schematic structural diagram of a face recognition device according to an embodiment of the present application. The face recognition apparatus 900 may be implemented as all or a portion of an edge gateway by software, hardware, or a combination thereof. The apparatus 900 comprises:
the video receiving module 910 is configured to receive a first video file sent by a camera, and extract a face feature value of a face to be recognized in the first video file;
the face recognition module 920 is configured to perform a first recognition process on the face to be recognized based on the face feature value and each class center feature value stored in advance to obtain a first recognition result, where the class center feature value is a center value of all face feature values of each designated face.
Optionally, the apparatus 900 further includes:
and the recognition result sending module is used for acquiring the face identification of the face to be recognized based on the first recognition result and sending the face characteristic value and the face identification to a server so that the server carries out second recognition processing on the face to be recognized based on the face identification, the face characteristic value and each class center characteristic value loaded from a database.
Optionally, the apparatus 900 further includes:
the data receiving module is used for receiving a target face characteristic value and a target class identification of at least one designated face sent by the server;
and respectively calculating class center characteristic values corresponding to the target class identifications on the basis of the target face characteristic values.
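The class center characteristic value described above can be understood as the center of all face characteristic values registered under one class identification. As an illustrative sketch only (the patent does not specify a formula), the center can be taken as the element-wise mean of the feature vectors; all function and variable names below are hypothetical:

```python
# Illustrative sketch, not the patented implementation: compute a
# "class center characteristic value" per target class identification
# as the element-wise mean of that face's registered feature vectors.
def compute_class_centers(features_by_class):
    """features_by_class: {class_id: [feature_vector, ...]}
    returns {class_id: center_vector}."""
    centers = {}
    for class_id, vectors in features_by_class.items():
        dim = len(vectors[0])
        # Element-wise mean over all feature vectors of this designated face.
        centers[class_id] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return centers

centers = compute_class_centers({
    "user_a": [[1.0, 0.0], [0.0, 1.0]],  # two registered photos
    "user_b": [[0.5, 0.5]],
})
print(centers["user_a"])  # [0.5, 0.5]
```

With this representation, the edge gateway stores one center vector per designated face instead of every registered feature value, which is the source of the reduced data size the embodiment describes.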
Optionally, the identification result sending module includes:
a first result unit, configured to generate a strange type identifier if the first recognition result indicates that a first target class central feature value matching the face feature value does not exist in the various class central feature values, and determine that the face type identifier of the face to be recognized is the strange type identifier;
and a second result unit, configured to determine, if the first recognition result indicates that a second target class central feature value matching the face feature value exists in the various class central feature values, that the face class identifier of the face to be recognized is the target class identifier corresponding to the second target class central feature value.
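The behavior of the two result units can be sketched as a nearest-center lookup under a similarity threshold. The choice of cosine similarity, the threshold value, and all names below are assumptions for illustration; the patent does not fix the similarity metric:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity; returns 0.0 for a zero vector.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def first_recognition(face_feature, class_centers, threshold=0.8):
    """Return the best-matching target class identifier, or a strange
    class identifier if no class center exceeds the threshold."""
    best_id, best_sim = None, threshold
    for class_id, center in class_centers.items():
        sim = cosine_similarity(face_feature, center)
        if sim > best_sim:
            best_id, best_sim = class_id, sim
    if best_id is None:
        return "stranger"  # strange class identifier is generated
    return best_id         # target class identifier of the matched center

centers = {"user_a": [0.5, 0.5], "user_b": [1.0, 0.0]}
print(first_recognition([0.6, 0.55], centers))  # "user_a"
print(first_recognition([-1.0, 0.1], centers))  # "stranger"
```

The first branch of the sketch corresponds to the second result unit (a matching center exists), and the fallback corresponds to the first result unit (no match, strange class identifier).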
Optionally, the identification result sending module includes:
and the first sending unit is used for sending the face feature value and the strange class identification to a server if the first recognition result indicates that the first target class center feature value matched with the face feature value does not exist in the class center feature values, so that the server carries out second recognition processing on the face to be recognized based on the face feature value and the class center feature values loaded from a database.
Optionally, the identification result sending module further includes:
a second sending unit, configured to send the face feature value and the target class identifier to a server if the first recognition result indicates that a second target class center feature value matching the face feature value exists in the class center feature values, so that the server updates the class center feature values loaded in the database based on the face feature value.
When the scheme of the embodiment of the application is executed, the edge gateway receives a first video file sent by a camera, extracts the face characteristic value of a face to be recognized in the first video file, and performs first recognition processing on the face to be recognized based on the face characteristic value and various types of central characteristic values stored in advance to obtain a first recognition result, wherein the type of central characteristic values are central values of all face characteristic values of each designated face. The face to be recognized is recognized through the edge gateway, and the edge gateway can recognize the face only by comparing the pre-stored center characteristic values of various classes with the face characteristic value of the face to be recognized. Compared with the prior art that the server loads all face characteristic values, the loaded data size is small, and the face recognition efficiency can be improved on the basis of realizing face recognition.
Please refer to fig. 10, which is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present disclosure. The face recognition apparatus 1000 may be implemented as all or a portion of a server by software, hardware, or a combination of both. The apparatus 1000 comprises:
the first receiving module 1010 is configured to receive a face feature value of a face to be recognized and a face type identifier of the face to be recognized, where the face feature value is sent by an edge gateway;
a second recognition module 1020, configured to perform second recognition processing on the face feature value and each class center feature value loaded from the database if the face class identifier indicates that the face to be recognized is a strange user, and update each class center feature value based on a second recognition result, where the class center feature value is a center value of all face feature values of each designated face.
Optionally, the apparatus 1000 further comprises:
the first acquisition module is used for acquiring at least one face photo and/or second video file uploaded by a terminal, respectively extracting a target face characteristic value of a designated face in each face photo and/or second video file, and acquiring a target class identifier of each designated face;
the second sending module is used for sending each target face characteristic value and the target class identification to an edge gateway so that the edge gateway performs first recognition processing on a face to be recognized based on each target face characteristic value and the target class identification;
and the third calculation module is used for calculating a target class central characteristic value corresponding to the target face characteristic value and writing the target class identification, the target face characteristic value and the target class central characteristic value into a database.
Optionally, the second identification module 1020 includes:
and the first recognition unit is used for determining that the face to be recognized is a strange user if the face type identification is not matched with the target type identification, performing second recognition processing on the face characteristic value and each type of central characteristic value loaded from a database, and updating each type of central characteristic value based on a second recognition result.
Optionally, the second identification module 1020 further includes:
and the first reminding unit is used for sending first reminding information to the terminal if the face type identification is not matched with the target type identification, and the first reminding information indicates that the face to be recognized is a strange user.
Optionally, the second identification module 1020 includes:
the first calculating unit is used for respectively calculating the similarity between the face characteristic value and each class center characteristic value;
the second calculation unit is used for updating the class center characteristic value corresponding to the first similarity based on the face characteristic value if a first similarity larger than a similarity threshold exists among the similarities, and writing the strange class identification and the face characteristic value into the database;
and the third calculating unit is used for writing the strange class identification and the face characteristic value into the database if no second similarity larger than the similarity threshold exists among the similarities.
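When a class center matches, it must be refreshed with the new face characteristic value. One natural sketch, assuming a running count of feature values per class is kept (the patent does not specify the update formula), is an incremental mean:

```python
# Hedged sketch of the second calculation unit's update step: the matched
# class center becomes the mean over (count + 1) feature values. The
# running-count bookkeeping is an assumption for illustration.
def update_class_center(center, count, new_feature):
    """Incremental mean: new_center = (center*count + new_feature) / (count+1)."""
    return [(c * count + f) / (count + 1) for c, f in zip(center, new_feature)]

center = [0.5, 0.5]  # current center, mean of 3 registered feature values
updated = update_class_center(center, 3, [0.5, 2.5])
print(updated)  # [0.5, 1.0]
```

Updating only the center vector (rather than recomputing from every stored feature value) keeps the per-recognition cost independent of how many feature values have accumulated for a class.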
Optionally, the apparatus 1000 further comprises:
and the second reminding unit is used for sending second prompt information to the terminal if the face class identification matches the target class identification, wherein the second prompt information indicates that the face to be recognized is a familiar user.
Optionally, the apparatus 1000 further comprises:
and the updating unit is used for updating the target class center characteristic value corresponding to the face class identification in the database based on the target class identification if the face class identification is matched with the target class identification.
When the method of the embodiment of the application is executed, the server receives the face characteristic value of a face to be recognized and the face class identification of the face to be recognized sent by an edge gateway. If the face class identification indicates that the face to be recognized is a strange user, the server performs second recognition processing on the face characteristic value and each class center characteristic value loaded from the database, and updates each class center characteristic value based on the second recognition result, where a class center characteristic value is the center value of all face characteristic values of a designated face. In this scheme, the server receives the result of the first recognition performed on the face to be recognized by the edge gateway; if that result indicates that the edge gateway did not recognize the face, the server performs second recognition on the face to be recognized and determines the user identity of the face to be recognized. Whether in the first recognition by the edge gateway or the second recognition by the server, recognition is realized by comparing class center characteristic values with the face characteristic value, which differs from the prior art in which the server loads the full amount of face characteristic values to perform feature comparison processing. Therefore, on the basis of realizing face recognition, both the efficiency and the accuracy of face recognition can be improved.
Please refer to fig. 11, which provides a schematic structural diagram of an edge gateway according to an embodiment of the present application. As shown in fig. 11, the edge gateway 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
The communication bus 1002 is used to implement connection communication among these components.
The user interface 1003 may include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 connects various parts throughout the edge gateway 1000 using various interfaces and lines, and performs the various functions of the edge gateway 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and calling the data stored in the memory 1005.
Alternatively, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like.
Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing the content required to be displayed by the display screen; the modem is used to handle wireless communications.
It is understood that the above modem may not be integrated into the processor 1001 but may be implemented by a separate chip.
The Memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store an instruction, a program, code, a set of codes, or a set of instructions. The memory 1005 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the storage data area may store the data and the like referred to in the above respective method embodiments. The memory 1005 may optionally be at least one memory device located remotely from the processor 1001.
As shown in fig. 11, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a face recognition program.
In the edge gateway 1000 shown in fig. 11, the user interface 1003 is mainly used as an interface for providing input for a user and acquiring the data input by the user; the processor 1001 may be configured to call the face recognition program stored in the memory 1005 and specifically perform the following operations:
receiving a first video file sent by a camera, and extracting a face characteristic value of a face to be recognized in the first video file;
and performing first recognition processing on the face to be recognized based on the face characteristic values and various pre-stored class central characteristic values to obtain a first recognition result, wherein the class central characteristic values are the central values of all face characteristic values of each specified face.
In one embodiment, the processor 1001 also performs the following operations:
and acquiring the face type identification of the face to be recognized based on the first recognition result, and sending the face characteristic value and the face type identification to a server so that the server carries out second recognition processing on the face to be recognized based on the face type identification, the face characteristic value and each class center characteristic value loaded from a database.
In one embodiment, before performing the step of receiving the first video file sent by the camera and extracting the face characteristic value of the face to be recognized in the first video file, the processor 1001 further performs the following operations:
receiving a target face characteristic value and a target class identification of at least one designated face sent by a server;
and respectively calculating class center characteristic values corresponding to the target class identifications based on the target face characteristic values.
In an embodiment, when the processor 1001 performs the step of obtaining the face class identifier of the face to be recognized based on the first recognition result, the following operations are specifically performed:
if the first recognition result indicates that a first target class central characteristic value matched with the face characteristic value does not exist in the various class central characteristic values, generating a strange class identification, and determining the face class identification of the face to be recognized as the strange class identification;
and if the first recognition result indicates that a second target class central characteristic value matched with the face characteristic value exists in the various class central characteristic values, determining that the face class identification of the face to be recognized is the target class identification corresponding to the second target class central characteristic value.
In an embodiment, when the step of sending the face feature value and the face class identifier to a server to enable the server to perform a second recognition process on the face to be recognized based on the face class identifier, the face feature value, and each class center feature value loaded from a database is performed, the processor 1001 specifically performs the following operations:
and if the first recognition result indicates that a first target class central characteristic value matched with the face characteristic value does not exist in the class central characteristic values, sending the face characteristic value and the strange class identification to a server, so that the server performs second recognition processing on the face to be recognized based on the face characteristic value and the class central characteristic values loaded from a database.
In one embodiment, the processor 1001 also performs the following operations:
and if the first recognition result indicates that a second target class center characteristic value matched with the face characteristic value exists in the class center characteristic values, sending the face characteristic value and the target class identification to a server, so that the server updates the loaded class center characteristic values in a database based on the face characteristic value.
Please refer to fig. 12, which provides a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 12, the server 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, and at least one communication bus 1002.
The communication bus 1002 is used to implement connection communication among these components.
The user interface 1003 may include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 connects various parts throughout the server 1000 using various interfaces and lines, and performs the various functions of the server 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and calling the data stored in the memory 1005.
Alternatively, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like.
Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing the content required to be displayed by the display screen; the modem is used to handle wireless communications.
It is understood that the modem may not be integrated into the processor 1001, but may be implemented by a single chip.
The Memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store an instruction, a program, code, a set of codes, or a set of instructions. The memory 1005 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the storage data area may store the data and the like referred to in the above respective method embodiments. The memory 1005 may alternatively be at least one memory device located remotely from the processor 1001.
As shown in fig. 12, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a face recognition program.
In the server 1000 shown in fig. 12, the user interface 1003 is mainly used as an interface for providing input for a user and acquiring the data input by the user; the processor 1001 may be configured to call the face recognition program stored in the memory 1005 and specifically perform the following operations:
receiving a face characteristic value of a face to be recognized and a face type identification of the face to be recognized, which are sent by an edge gateway;
and if the face type identification indicates that the face to be recognized is a strange user, performing second recognition processing on the face characteristic value and each type of central characteristic value loaded from the database, and updating each type of central characteristic value based on a second recognition result, wherein the type of central characteristic value is the central value of all face characteristic values of each designated face.
In one embodiment, before performing the step of receiving the face characteristic value of the face to be recognized and the face class identification of the face to be recognized sent by the edge gateway, the processor 1001 further performs the following operations:
acquiring at least one face photo and/or second video file uploaded by a terminal, respectively extracting a target face characteristic value of a designated face in each face photo and/or second video file, and acquiring a target class identifier of each designated face;
sending each target face characteristic value and the target class identification to an edge gateway so that the edge gateway performs first recognition processing on a face to be recognized based on each target face characteristic value and the target class identification;
and calculating a target class central characteristic value corresponding to the target face characteristic value, and writing the target class identification, the target face characteristic value and the target class central characteristic value into a database.
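The write step above can be sketched with a minimal, hypothetical record layout in which each designated face stores its target class identification, its target face characteristic values, and the derived target class central characteristic value. The schema below is an assumption for illustration, not the patent's database design:

```python
# Hypothetical in-memory "database" keyed by target class identification.
database = {}

def register_face(target_class_id, target_features):
    """Compute the target class central characteristic value (element-wise
    mean of the target face characteristic values) and write the class
    identification, the raw feature values, and the center into the database."""
    dim = len(target_features[0])
    center = [sum(v[i] for v in target_features) / len(target_features)
              for i in range(dim)]
    database[target_class_id] = {
        "class_id": target_class_id,
        "features": target_features,
        "class_center": center,
    }
    return center

print(register_face("user_a", [[1.0, 0.0], [0.0, 1.0]]))  # [0.5, 0.5]
```

Keeping both the raw feature values and the derived center lets the server recompute or refine a center later, while only the compact center needs to be pushed to the edge gateway.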
In an embodiment, when the processor 1001 executes the steps of performing the second recognition processing on the face feature value and each class center feature value loaded from the database if the face class identifier indicates that the face to be recognized is a strange user, and updating each class center feature value based on a second recognition result, specifically executing the following operations:
if the face type identification is not matched with the target type identification, determining that the face to be recognized is a strange user, wherein the face type identification is a strange type identification, performing second recognition processing on the face characteristic value and each type of central characteristic value loaded from a database, and updating each type of central characteristic value based on a second recognition result.
In one embodiment, the processor 1001 also performs the following operations:
and if the face type identification is not matched with the target type identification, sending first prompt information to the terminal, wherein the first prompt information indicates that the face to be recognized is a strange user.
In an embodiment, when the processor 1001 performs the step of performing the second recognition processing on the face feature value and each class center feature value loaded from the database, and updating each class center feature value based on a second recognition result, the following operations are specifically performed:
respectively calculating the similarity between the face characteristic value and each class center characteristic value;
if the similarity has a first similarity larger than a similarity threshold value, updating a class center characteristic value corresponding to the first similarity based on the face characteristic value, and writing the strange class identification and the face characteristic value into the database;
and if no second similarity larger than the similarity threshold exists among the similarities, writing the strange class identification and the face characteristic value into the database.
In one embodiment, the processor 1001 also performs the following operations:
and if the face identification is matched with the target identification, sending second prompt information to the terminal, wherein the second prompt information indicates that the face to be recognized is a familiar user.
In one embodiment, the processor 1001 also performs the following operations: and if the face identification is matched with the target identification, updating a target class center characteristic value corresponding to the face identification in the database based on the target identification.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA), an Integrated Circuit (IC), or the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art will recognize that the embodiments described in this specification are preferred embodiments and that acts or modules referred to are not necessarily required for this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some service interfaces, indirect coupling or communication connection of devices or units, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory; the memory may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above description is merely an exemplary embodiment of the present disclosure, and the scope of the present disclosure is not limited thereto. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (18)

1. A face recognition method is applied to an edge gateway, and the method comprises the following steps:
receiving a first video file sent by a camera, and extracting a face characteristic value of a face to be recognized in the first video file;
and performing first recognition processing on the face to be recognized based on the face characteristic values and various pre-stored class central characteristic values to obtain a first recognition result, wherein the class central characteristic values are the central values of all face characteristic values of each specified face.
2. The method of claim 1, further comprising:
and acquiring a face type identification of the face to be recognized based on the first recognition result, and sending the face characteristic value and the face type identification to a server so as to enable the server to perform second recognition processing on the face to be recognized based on the face type identification, the face characteristic value and each class center characteristic value loaded from a database.
3. The method according to claim 1, wherein before the receiving of the first video file sent by the camera and the extracting of the face characteristic value of the face to be recognized in the first video file, the method further comprises:
receiving a target face characteristic value and a target class identification of at least one designated face sent by a server;
and respectively calculating class center characteristic values corresponding to the target class identifications on the basis of the target face characteristic values.
4. The method of claim 2, wherein obtaining the face class identifier of the face to be recognized based on the first recognition result comprises:
if the first recognition result indicates that no first target class center characteristic value matching the face characteristic value exists among the class center characteristic values, generating a strange class identifier, and determining the face class identifier of the face to be recognized to be the strange class identifier;
if the first recognition result indicates that a second target class center characteristic value matching the face characteristic value exists among the class center characteristic values, determining the face class identifier of the face to be recognized to be the target class identifier corresponding to the second target class center characteristic value.
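The two branches of claim 4 could be sketched as follows on the gateway, assuming the first recognition step yields either the matched target class identifier or `None`, and assuming (purely for illustration) that a strange class identifier is a random UUID under a `stranger-` prefix:

```python
import uuid

STRANGE_PREFIX = "stranger-"  # assumed naming scheme for strange class ids

def face_class_identifier(first_result):
    """Map the first recognition result to a face class identifier.
    `first_result` is the matched target class identifier, or None when
    no class center characteristic value matched the probe feature."""
    if first_result is None:
        # no matching first target class center: generate a strange class id
        return STRANGE_PREFIX + uuid.uuid4().hex
    # a matching second target class center exists: reuse its class id
    return first_result
```

Either way the gateway then forwards the face characteristic value together with this identifier to the server (claims 5 and 6).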
5. The method of claim 4, wherein sending the face characteristic value and the face class identifier to the server, so that the server performs the second recognition processing on the face to be recognized based on the face class identifier, the face characteristic value, and the class center characteristic values loaded from the database, comprises:
if the first recognition result indicates that no first target class center characteristic value matching the face characteristic value exists among the class center characteristic values, sending the face characteristic value and the strange class identifier to the server, so that the server performs the second recognition processing on the face to be recognized based on the face characteristic value and the class center characteristic values loaded from the database.
6. The method of claim 4, further comprising:
if the first recognition result indicates that a second target class center characteristic value matching the face characteristic value exists among the class center characteristic values, sending the face characteristic value and the target class identifier to the server, so that the server updates the class center characteristic values loaded from the database based on the face characteristic value.
7. A face recognition method applied to a server, the method comprising:
receiving, from an edge gateway, a face characteristic value of a face to be recognized and a face class identifier of the face to be recognized;
if the face class identifier indicates that the face to be recognized belongs to a stranger, performing second recognition processing on the face characteristic value against the class center characteristic values loaded from a database, and updating the class center characteristic values based on a second recognition result, wherein each class center characteristic value is the center value of all face characteristic values of one designated face.
8. The method of claim 7, wherein before receiving the face characteristic value of the face to be recognized and the face class identifier of the face to be recognized sent by the edge gateway, the method further comprises:
obtaining at least one face photo and/or second video file uploaded by a terminal, extracting a target face characteristic value of the designated face in each face photo and/or second video file, and obtaining a target class identifier of each designated face;
sending each target face characteristic value and the corresponding target class identifier to the edge gateway, so that the edge gateway performs first recognition processing on a face to be recognized based on the target face characteristic values and the target class identifiers;
calculating the target class center characteristic value corresponding to the target face characteristic values, and writing the target class identifier, the target face characteristic values, and the target class center characteristic value into a database.
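A minimal sketch of the server-side enrollment in claim 8, assuming the class center is the mean of the enrolled vectors and using a plain dict of lists as a stand-in for the database (the key names `class_id`, `feature`, and `class_center` are illustrative, echoing the three-column layout of claim 14):

```python
def enroll(database, target_class_id, target_feature_values):
    """Compute the target class center characteristic value for one
    designated face and write identifier, features and center into a
    simple in-memory 'database' (dict of parallel column lists)."""
    n = len(target_feature_values)
    # assumed definition: center = per-dimension mean of the class's vectors
    center = [sum(dim) / n for dim in zip(*target_feature_values)]
    for feature in target_feature_values:
        database["class_id"].append(target_class_id)
        database["feature"].append(feature)
        database["class_center"].append(center)
    return center
```

In the claimed system the same `(target_class_id, center)` pairs would also be pushed to the edge gateway so it can run the first recognition locally.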
9. The method of claim 8, wherein, if the face class identifier indicates that the face to be recognized belongs to a stranger, performing the second recognition processing on the face characteristic value against the class center characteristic values loaded from the database, and updating the class center characteristic values based on the second recognition result, comprises:
if the face class identifier does not match any target class identifier, determining that the face to be recognized belongs to a stranger and that the face class identifier is a strange class identifier, performing the second recognition processing on the face characteristic value against the class center characteristic values loaded from the database, and updating the class center characteristic values based on the second recognition result.
10. The method of claim 9, further comprising:
if the face class identifier does not match any target class identifier, sending first prompt information to the terminal, the first prompt information indicating that the face to be recognized belongs to a stranger.
11. The method of claim 9, wherein performing the second recognition processing on the face characteristic value against the class center characteristic values loaded from the database, and updating the class center characteristic values based on the second recognition result, comprises:
calculating the similarity between the face characteristic value and each class center characteristic value;
if a first similarity greater than a similarity threshold exists among the similarities, updating the class center characteristic value corresponding to the first similarity based on the face characteristic value, and writing the strange class identifier and the face characteristic value into the database;
if no similarity greater than the similarity threshold exists among the similarities, writing the strange class identifier and the face characteristic value into the database.
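Claim 11's server-side branch could look like the sketch below. The cosine metric, the 0.6 threshold, and the center-update rule (averaging the stored center with the new sample) are all assumptions; the patent only says the matched center is updated "based on the face characteristic value":

```python
def second_recognition(database, strange_class_id, face_feature,
                       similarity_threshold=0.6):
    """Compare the face characteristic value with each class center
    loaded from the database. On a match, pull the matched center toward
    the new sample (assumed: simple averaging); in both branches the
    strange class identifier and the feature are written back."""
    matched = None
    for class_id, center in database["centers"].items():
        dot = sum(x * y for x, y in zip(face_feature, center))
        na = sum(x * x for x in face_feature) ** 0.5
        nb = sum(y * y for y in center) ** 0.5
        if dot / (na * nb) > similarity_threshold:
            matched = class_id
            # assumed update rule for the class center characteristic value
            database["centers"][class_id] = [
                (c + f) / 2 for c, f in zip(center, face_feature)]
            break
    # claim 11 writes the strange class id and feature in either branch
    database["rows"].append((strange_class_id, face_feature))
    return matched
```

Writing the feature back even on a miss lets a recurring stranger accumulate samples under the same strange class identifier, which later claims use for prompting.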
12. The method of claim 9, further comprising:
if the face class identifier matches a target class identifier, sending second prompt information to the terminal, the second prompt information indicating that the face to be recognized belongs to a familiar user.
13. The method of claim 9, further comprising:
if the face class identifier matches a target class identifier, updating, based on the target class identifier, the target class center characteristic value corresponding to the face class identifier in the database.
14. The method of any one of claims 7-13, wherein the data in the database is stored in a columnar format comprising at least three columns: an identifier of the class to which the face belongs, a face characteristic value, and a class center characteristic value.
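The three-column layout of claim 14 could be sketched with SQLite; the table name, column names, and JSON encoding of the vectors are illustrative assumptions, not details from the patent:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE faces (
    class_id      TEXT NOT NULL,  -- identifier of the class the face belongs to
    feature_value TEXT NOT NULL,  -- face characteristic value (JSON vector)
    class_center  TEXT NOT NULL   -- class center characteristic value (JSON)
)""")
conn.execute(
    "INSERT INTO faces VALUES (?, ?, ?)",
    ("alice", json.dumps([0.9, 0.1]), json.dumps([1.0, 0.0])),
)
# the server's second recognition only needs the class centers, so a scan
# of that single column suffices instead of loading every stored feature
row = conn.execute("SELECT class_id, class_center FROM faces").fetchone()
```

Keeping the class center in its own column is what lets the server (or gateway) load the small center set without touching the full per-face feature history.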
15. A face recognition apparatus applied to an edge gateway, the apparatus comprising:
a video receiving module, configured to receive a first video file sent by a camera and extract a face characteristic value of a face to be recognized from the first video file;
a face recognition module, configured to perform first recognition processing on the face to be recognized based on the face characteristic value and pre-stored class center characteristic values to obtain a first recognition result, wherein each class center characteristic value is the center value of all face characteristic values of one specified face.
16. A face recognition apparatus applied to a server, the apparatus comprising:
a first receiving module, configured to receive, from an edge gateway, a face characteristic value of a face to be recognized and a face class identifier of the face to be recognized;
a second recognition module, configured to, if the face class identifier indicates that the face to be recognized belongs to a stranger, perform second recognition processing on the face characteristic value against the class center characteristic values loaded from a database, and update the class center characteristic values based on a second recognition result, wherein each class center characteristic value is the center value of all face characteristic values of one designated face.
17. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method steps of any one of claims 1-6 or 7-14.
18. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor to perform the method steps of any one of claims 1-6 or 7-14.
CN202110488352.6A 2021-04-30 2021-04-30 Face recognition method, face recognition device, storage medium and electronic equipment Pending CN115273168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110488352.6A CN115273168A (en) 2021-04-30 2021-04-30 Face recognition method, face recognition device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115273168A true CN115273168A (en) 2022-11-01

Family

ID=83744768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110488352.6A Pending CN115273168A (en) 2021-04-30 2021-04-30 Face recognition method, face recognition device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115273168A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination