CN110472537B - Self-adaptive identification method, device, equipment and medium


Publication number
CN110472537B
CN110472537B (application CN201910699845.7A)
Authority
CN
China
Prior art keywords
face
face sample
feature
current
portrait
Prior art date
Legal status
Active
Application number
CN201910699845.7A
Other languages
Chinese (zh)
Other versions
CN110472537A (en)
Inventor
赵伟
叶泽雄
段克晓
王松健
李火荣
肖万鹏
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910699845.7A
Publication of CN110472537A
Application granted
Publication of CN110472537B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a self-adaptive identification method, apparatus, device, and medium. The method comprises: constructing a face sample portrait set, where the set comprises user identifiers and the face sample portrait corresponding to each user identifier; acquiring a current user image; extracting current face features from the current user image; querying the face sample portrait set for a matching face feature set according to the current face features, where the dissimilarity between each element of the matching face feature set and the current face features is smaller than a preset threshold; if the matching face feature set is not empty, determining a target user identifier from the matching face feature set; and updating the face sample portrait corresponding to the target user identifier according to the current face features. With the invention, every face sample portrait in the face sample portrait set has the opportunity to be updated automatically, and high recognition accuracy is maintained without additional storage burden and without requiring user cooperation.

Description

Self-adaptive identification method, device, equipment and medium
Technical Field
The present invention relates to the field of automatic identification, and in particular, to a self-adaptive identification method, apparatus, device, and medium.
Background
Face recognition is an important biometric identification technology and a very active research topic in pattern recognition and computer vision. Compared with other biometric technologies such as fingerprint and iris recognition, face recognition is direct, friendly, convenient, fast, unobtrusive, non-invasive, and highly interactive, and therefore has very broad application prospects. As face recognition technology matures, it has been widely applied in public security, banking, customs, airports, intelligent video surveillance, medical treatment, and many other fields, and shows strong vitality.
In practice, the prior art generally registers multiple face sample pictures for each user to be identified, so as to store enough face information to ensure recognition accuracy. As time passes, a user's appearance may change; to maintain recognition accuracy, the number of face sample pictures then has to be increased further, which consumes additional resources and reduces recognition speed.
Disclosure of Invention
In the prior art, maintaining recognition accuracy requires recording multiple face sample pictures per user, and this repeated recording wastes storage resources, reduces recognition speed, increases the burden on users, and reduces user stickiness. To solve these technical problems, the embodiments of the invention provide a self-adaptive identification method, apparatus, device, and medium.
In one aspect, the present invention provides an adaptive recognition method, the method comprising:
constructing a face sample portrait set, wherein the face sample portrait set comprises user identifiers and a face sample portrait corresponding to each user identifier; each face sample portrait includes at least one face feature;
acquiring a current user image;
extracting current face features according to the current user image;
querying the face sample portrait set for a matching face feature set according to the current face features, wherein the dissimilarity between elements of the matching face feature set and the current face features is smaller than a preset threshold;
if the matched face feature set is not empty, determining a target user identification according to the matched face feature set; and updating the face sample portrait corresponding to the target user identifier according to the current face characteristics.
In another aspect, the present invention provides an adaptive recognition apparatus, the apparatus comprising:
the face sample portrait set construction module is used for constructing a face sample portrait set, wherein the face sample portrait set comprises user identifiers and a face sample portrait corresponding to each user identifier; each face sample portrait includes at least one face feature;
the current user image acquisition module is used for acquiring a current user image;
the current face feature extraction module is used for extracting current face features from the current user image;
the matching module is used for querying the face sample portrait set for a matching face feature set according to the current face features, wherein the dissimilarity between elements of the matching face feature set and the current face features is smaller than a preset threshold;
the updating module is used for determining a target user identifier according to the matching face feature set if the matching face feature set is not empty, and updating the face sample portrait corresponding to the target user identifier according to the current face features.
In another aspect, the invention provides a device comprising a processor and a memory, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the adaptive recognition method.
In another aspect, the present invention provides a computer storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, at least one program, a set of codes, or a set of instructions is loaded by a processor and performs an adaptive recognition method.
The invention provides a self-adaptive identification method, apparatus, device, and medium. Because the face sample portrait corresponding to the target user identifier is updated according to the current face features, that portrait adapts to the current face features, and every face sample portrait stored in the face sample portrait set is updated automatically. The face sample portrait set can therefore adapt over time to changes in external factors such as the user's appearance and the shooting conditions, and high recognition accuracy is maintained without adding a new storage burden and without requiring users to actively enter new face sample pictures.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic view of an application scenario of an access control system provided by the invention;
FIG. 2 is a schematic diagram of another possible implementation scenario of the adaptive recognition method provided by the present invention;
FIG. 3 is a flow chart of an adaptive recognition method provided by the invention;
FIG. 4 is a flow chart of constructing a face sample portrait set provided by the invention;
FIG. 5 is a flow chart of another face sample portrait set construction provided by the present invention;
FIG. 6 is a flowchart for clustering individual face sample features in each face sample feature set provided by the present invention;
FIG. 7 is a schematic diagram of the present invention for obtaining a target user identifier according to a user selection;
FIG. 8 is a flow chart of face sample portraits corresponding to the target user identification updated according to the current face characteristics provided by the invention;
FIG. 9 is a flowchart of feature fusion between the current face features and the target face sample features to obtain fused face sample features;
FIG. 10 is a block diagram of an adaptive identification device provided by the present invention;
FIG. 11 is a schematic hardware structure diagram of an apparatus for implementing the method provided by the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the above figures are used to distinguish similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so designated may be interchanged where appropriate, so that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating the embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. In order to clearly describe the technical solutions described in the embodiments of the present invention, the embodiments of the present invention first explain related technical terms, technical background and technical environment:
machine Learning (ML) is a multi-domain interdisciplinary, involving multiple disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, etc. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, confidence networks, reinforcement learning, transfer learning, induction learning, teaching learning, and the like.
RGBA: a color space comprising Red, Green, and Blue channels plus an Alpha (opacity) channel.
As described in the background section, the prior art often maintains accuracy by increasing the number of face samples. Taking an access control scenario as an example, please refer to FIG. 1, which shows an application scenario of an access control system that must perform face recognition on users. When the access control system is first put into operation, each user allowed to pass must register, yielding at least one face sample picture per user; the access control system uses these face sample pictures as the basis for face recognition. During operation, an access control machine in the system collects user images through its camera and compares them with the face sample pictures to judge whether the person in the user image is a registered user.
However, an access control system may include multiple access control machines, each installed in a different position with different lighting, and the shooting conditions of each machine may also differ from the conditions under which the face sample pictures were captured at registration, reducing recognition accuracy. By recording multiple face sample pictures, the access control system holds more user information and can maintain accuracy: for example, if multiple face sample pictures are registered for a user and the image captured by a camera during operation matches any of them, recognition succeeds.
As time passes, the user's appearance may change and the camera's acquisition environment may also change; even for a registered user, the user image collected by the camera may differ greatly from the face sample pictures taken at registration, causing recognition to fail. To reduce the probability of this situation, some access control systems require users to enter new face sample pictures at intervals, so that the latest feature information of the users is acquired over time and recognition accuracy is maintained.
In summary, to maintain recognition accuracy the prior art must record multiple face sample pictures per user, and this repeated recording wastes storage resources, reduces recognition speed, increases the burden on users, and reduces user stickiness. The embodiment of the invention therefore provides a self-adaptive identification method.
The self-adaptive recognition method disclosed in the embodiment of the invention can be applied in any scenario requiring face recognition, and so can be used widely in public security, banking, customs, airports, intelligent video surveillance, medical treatment, and many other fields. The access control system shown in FIG. 1 is a typical implementation scenario of the embodiment of the present invention. Further, please refer to FIG. 2, which illustrates another possible implementation scenario of the adaptive recognition method. The scenario comprises a client 01, a server 03, and a database server 05, where the server 03 is communicatively connected to the database server 05; the database server 05 may also be built into the server 03. The database server 05 may be configured to obtain a face sample portrait from at least one face sample picture, and the server 03 is configured to identify a user according to the face sample portraits stored in the database server 05 and to feed the identification result back to the client 01.
The client 01 may communicate with the server 03 in Browser/Server (B/S) mode or Client/Server (C/S) mode. In the registration stage, the client 01 takes a face sample picture of the user and transmits it to the server 03 so that the server 03 can generate the face sample portrait corresponding to the user. In the running stage, the client 01 captures an image of a user and transmits it to the server 03, so that the server 03 queries the database server 05 for a face sample portrait matching the image and feeds the recognition result back to the client 01 according to the query result. The server 03 is further configured to automatically update the face sample portraits according to the user images photographed during operation.
The client 01 may include physical devices such as smartphones, tablet computers, notebook computers, digital assistants, smart wearable devices, and vehicle-mounted terminals, and may also include software running on those physical devices, such as application programs. For example, the client 01 may run security software, access control software, or payment software.
The server 03 and the database server 05 may each comprise a server running independently, or a distributed server, or a server cluster consisting of a plurality of servers.
The invention provides a self-adaptive identification method, as shown in fig. 3, which comprises the following steps:
S101, constructing a face sample portrait set, wherein the face sample portrait set comprises user identifiers and a face sample portrait corresponding to each user identifier; each face sample portrait includes at least one face feature.
In the embodiment of the invention, a face sample portrait is generated for each user corresponding to each user identifier, and the face sample portrait is used as the recognition basis.
In a possible embodiment, the face sample portrait includes at least one target face sample feature. A target face sample feature may be obtained from at least one face sample feature (face feature); a face sample feature may correspond to a face sample picture taken by the user in the registration stage, one face sample feature being extracted from each face sample picture.
In order to construct a face sample portrait set, in the embodiment of the invention, a user to be identified is registered, and one or more face sample pictures are obtained through registration.
In one possible embodiment, the constructing a face sample portrait set is shown in fig. 4, and includes:
S1011, for each user identifier, acquiring a face sample picture of the user pointed to by the user identifier.
S1013, taking the face sample characteristics extracted from the face sample picture as target face sample characteristics, thereby constructing a face sample portrait.
S1015, constructing a face sample portrait set according to the corresponding relation between the user identification and the face sample portrait.
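The construction flow above can be sketched as follows. This is a minimal illustration only: the patent does not specify data structures, and `extract_feature` here is a hypothetical stand-in (flatten plus L2-normalization) for the real feature extraction network described later.

```python
import numpy as np

def extract_feature(picture):
    """Hypothetical stand-in for the feature extraction network:
    flattens the picture and L2-normalizes it into a feature vector."""
    v = np.asarray(picture, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def build_portrait_set(registrations):
    """Map each user identifier to a face sample portrait, i.e. a list
    of target face sample features (one per registered picture)."""
    portrait_set = {}
    for user_id, pictures in registrations.items():
        portrait_set[user_id] = [extract_feature(p) for p in pictures]
    return portrait_set
```

The resulting dictionary realizes the "correspondence between the user identification and the face sample portrait" of step S1015.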
In another possible embodiment, the constructing a face sample portrait set is shown in fig. 5, and includes:
S1012, for each user identifier, acquiring a plurality of face sample pictures of the user pointed to by the user identifier.
S1014, extracting a face sample feature for each face sample picture to obtain a face sample feature set corresponding to each user.
S1016, clustering the face sample features in each face sample feature set, obtaining at least one target face sample feature according to a clustering result, and constructing a face sample portrait according to the obtained target face sample feature.
In a possible embodiment of the present invention, each face sample feature may be represented by a vector, i.e. a point in a multidimensional space. Accordingly, the face sample features in a face sample feature set form a point set in that space; clustering the point set yields a clustering result, and each cluster center (at least one) in the result may be used as a target face sample feature, thereby obtaining the face sample portrait.
The algorithm for clustering the point set in the embodiment of the present invention is not limited, and in a possible implementation manner, the clustering the face sample features in each face sample feature set, as shown in fig. 6, includes:
S10161, representing the face sample features in the face sample feature set as multidimensional space points in a multidimensional space, to obtain the multidimensional space point set corresponding to the face sample feature set.
S10163, randomly selecting K elements from the multidimensional space point set to serve as the centers of the current clusters.
Specifically, K may be set according to actual requirements.
S10165, for each remaining multidimensional space point, calculating the dissimilarity between the point and each of the K current cluster centers, and assigning the point to the cluster whose center is least dissimilar.
In the embodiment of the invention, the Euclidean distance can be used for evaluating the dissimilarity, and the larger the Euclidean distance is, the larger the dissimilarity is.
S10167, recalculating the cluster centers of the K clusters according to the classification result; taking the recalculated cluster centers as the current cluster centers, and repeating step S10165 until the classification result no longer changes.
S10169, outputting the current classification result as the clustering result.
Correspondingly, the cluster center of each cluster in the current classification result corresponds to the target face sample characteristic.
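Steps S10161 to S10169 describe standard K-means clustering with Euclidean distance as the dissimilarity measure. The sketch below follows that reading; the choice of K, the random initialization, and the array layout are illustrative assumptions.

```python
import numpy as np

def kmeans(points, k, seed=0):
    """K-means per S10161-S10169: pick K random points as the initial
    cluster centers, assign every point to its least-dissimilar center
    (Euclidean distance), recompute the centers, and repeat until the
    assignment stops changing. The final centers play the role of the
    target face sample features."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    assignment = None
    while True:
        # dissimilarity of every point to every current cluster center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if assignment is not None and np.array_equal(new_assignment, assignment):
            return centers, assignment
        assignment = new_assignment
        for j in range(k):
            members = points[assignment == j]
            if len(members):
                centers[j] = members.mean(axis=0)
```

As the patent notes, a larger Euclidean distance means a larger dissimilarity, so `argmin` over distances selects the least-dissimilar cluster.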
S1018, constructing a face sample portrait set according to the corresponding relation between the user identification and the face sample portrait.
S103, acquiring a current user image.
Specifically, a user image may be shot by a camera in response to a preset instruction and used as the current user image; alternatively, a video stream may be obtained in real time from the camera and a current video frame containing a human face determined as the current user image.
S105, extracting the current face features according to the current user image.
In a possible embodiment, the current face features are obtained by extracting the RGBA data from the current user image and feeding the RGBA data into a preset feature extraction network. The feature extraction network may be obtained by machine learning.
In a possible embodiment, the feature extraction network is trained by a first adversarial learning method, which includes: training a first generation network and a first adversarial network, where the converged first generation network is used as the feature extraction network. The training data for the first generation network comprise first positive examples and first negative examples: the first positive examples are face feature matrices extracted from images containing faces shot by a camera, and the first negative examples are face feature matrices extracted from stored pictures containing faces. The first adversarial network judges whether a face feature matrix extracted by the first generation network is a positive or negative example, yielding a first judgment result. A first loss value is computed from the first judgment result together with the first positive and negative examples, and is used to update the parameters of the first generation network and of the first adversarial network until the first generation network converges; the converged first generation network is then taken as the feature extraction network.
In the embodiment of the invention, the feature extraction network is constructed by adversarial learning, whose framework comprises a generation network and a discrimination network. The generation network is the model to be trained; its purpose is to generate one type of data (positive examples) that is indistinguishable from another type of known data (negative examples). The discrimination network must distinguish positive from negative examples; its purpose is to oppose the generation network, forcing it to generate ever less distinguishable data. The goal of adversarial learning is a trained generation network that can produce artificial data consistent with known data. In this application, the first adversarial learning method trains the first generation network to obtain the feature extraction network, so that the face feature matrix extracted from an image captured by a camera becomes difficult to distinguish from the face feature matrix extracted from a stored picture.
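As a purely illustrative toy, the adversarial loop described above can be sketched with linear stand-ins for both networks: a linear "generation network" extracts features, a logistic "adversarial network" judges whether a feature came from a camera image (positive) or a stored picture (negative), and the extractor is updated in the opposite direction so the two sources become harder to distinguish. Every architecture, dimension, and learning rate here is an assumption; the patent specifies none of them.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT = 8, 4                             # raw input size, feature size (assumed)
W_ext = 0.1 * rng.normal(size=(D_FEAT, D_IN))   # toy "first generation network"
w_disc = 0.1 * rng.normal(size=D_FEAT)          # toy "first adversarial network"
b_disc = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_step(x, y, lr=0.05):
    """One adversarial update on sample x with label y (1 = camera image,
    0 = picture): the judge takes a gradient-descent step on the binary
    cross-entropy loss, while the extractor takes a gradient-ascent step
    on the same loss in order to fool the judge."""
    global W_ext, w_disc, b_disc
    f = W_ext @ x                               # extracted feature vector
    p = sigmoid(w_disc @ f + b_disc)            # judge's P(camera)
    g = p - y                                   # dLoss/dLogit for cross-entropy
    new_w_disc = w_disc - lr * g * f            # judge: descend the loss
    b_disc -= lr * g
    W_ext += lr * g * np.outer(w_disc, x)       # extractor: ascend the loss
    w_disc = new_w_disc

# synthetic "camera" vs "picture" inputs with a deliberate offset
camera = rng.normal(size=(50, D_IN)) + 1.0
picture = rng.normal(size=(50, D_IN)) - 1.0
for _ in range(20):
    for xc, xp in zip(camera, picture):
        train_step(xc, 1.0)
        train_step(xp, 0.0)
```

In a real system both networks would be deep models trained by a framework's autograd; the hand-written gradients here only make the opposing update directions explicit.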
In step S105, the face feature matrix output by the feature extraction network is used as the current face feature corresponding to the current user image. The feature extraction network may also be used to extract face sample features for face sample pictures entered by the user during the user registration phase.
S107, querying the face sample portrait set for a matching face feature set according to the current face features, wherein the dissimilarity between elements of the matching face feature set and the current face features is smaller than a preset threshold.
As stated above, each face sample portrait in the face sample portrait set includes at least one target face sample feature. Querying the face sample portrait set for matching face features whose dissimilarity to the current face features is smaller than a preset threshold then includes: traversing each target face sample feature in each face sample portrait of the set, calculating its dissimilarity to the current face features, and adding every target face sample feature whose dissimilarity is smaller than the preset threshold to the matching face feature set.
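The traversal just described amounts to a linear scan with a distance threshold. A minimal sketch follows; the dictionary layout and the use of Euclidean distance as the dissimilarity are assumptions consistent with the clustering step above.

```python
import numpy as np

def dissimilarity(a, b):
    # Euclidean distance: larger distance means larger dissimilarity
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def query_matching_set(portrait_set, current_feature, threshold):
    """Traverse every target face sample feature in every portrait and
    collect those whose dissimilarity to the current feature is below
    the preset threshold, remembering which user each came from."""
    matches = []  # list of (user_id, target_feature) pairs
    for user_id, portrait in portrait_set.items():
        for target_feature in portrait:
            if dissimilarity(target_feature, current_feature) < threshold:
                matches.append((user_id, target_feature))
    return matches
```

An empty result corresponds to the "matching face feature set is empty" branch; one element identifies the target user directly; several elements trigger the user-selection flow described below.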
S109, if the matched face feature set is not empty, determining a target user identification according to the matched face feature set; and updating the face sample portrait corresponding to the target user identifier according to the current face characteristics.
In a possible embodiment, if the matching face feature set contains only one matching face feature, that matching face feature is directly determined to be the target face feature, and the user identifier corresponding to the face sample portrait that includes the target face feature is determined to be the target user identifier.
If the matching face feature set comprises multiple elements, the user identifiers corresponding to the matching face sample portraits are extracted, where a matching face sample portrait is a face sample portrait that includes any element of the matching face feature set; the related information of those user identifiers is displayed; and the target user identifier is determined in response to a selection instruction.
As shown in FIG. 7, if the matching face feature set includes multiple elements, the relevant information of the user identifiers corresponding to the matching face sample portraits may be displayed for the user to select, yielding the target user identifier. When the matching face feature set contains more than one element, the current recognition result is ambiguous and the user's subjective choice is needed; the face sample portrait set is then updated based on the user's selection so that the next recognition yields an accurate result. The technical solution of the embodiment therefore achieves better recognition as time goes on and has self-learning capability.
In the embodiment of the invention, the face sample portrait corresponding to the target user identifier is updated according to the current face feature, so that the portrait changes adaptively with the current face feature and each face sample portrait stored in the face sample portrait set is updated automatically. The face sample portrait set can thus automatically adapt over time to changes in external factors such as the user's appearance and shooting conditions, and the recognition result remains highly accurate without imposing a new storage burden and without requiring the user to actively supply new face sample pictures.
In a possible embodiment, the updating the face sample portrait corresponding to the target user identifier according to the current face feature, as shown in fig. 8, includes:
S1091, determining target face sample characteristics corresponding to the target user identification in the matched face characteristic set.
S1093, carrying out feature fusion on the current face features and the target face sample features to obtain fused face sample features.
Feature fusion combines distinct and complementary features into a single unified feature in some organic fashion. The fused feature carries more accurate and richer information; it can be appreciated that the fused face sample feature is therefore more accurate and richer than either the original target face sample feature or the current face feature alone.
In one possible embodiment, to ensure that the fused face sample feature better reflects the user's current appearance, the weights may be adjusted during fusion so that the current face feature is weighted higher than the target face sample feature.
In a possible embodiment, the feature fusion between the current face feature and the target face sample feature to obtain a fused face sample feature, as shown in fig. 9, includes:
S10931, acquiring the weight of the current face feature according to the feature weight network.
The invention is not limited to a specific acquisition mode of the weight of the current face feature.
In a possible implementation manner, the feature weight network is trained by a second generative adversarial learning method, which includes training a second generation network and a second adversarial network; the converged second generation network obtained thereby serves as the feature weight network. The training data used for the second generation network comprises second positive examples and second negative examples: each second positive example comprises a plurality of face feature matrices obtained from a plurality of pictures containing faces, arranged in time sequence within a preset time period; each second negative example comprises a plurality of simulated face feature matrices obtained by weighted fitting of related pictures, the weights being produced by the second generation network. The second adversarial network discriminates between positive and negative examples to obtain a second discrimination result; the second discrimination result is used together with the second positive and negative examples to calculate a second loss value, which is used to update the parameters of the second generation network and of the second adversarial network until the second generation network converges.
The weight output by the converged second generation network is taken as the weight of the current face feature.
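As a loose illustration of S10931, the snippet below stands in for the converged second generation network with a single fully connected layer whose sigmoid output lies in (0, 1) and can thus serve directly as the weight of the current face feature. The architecture, class name, and parameters are assumptions; the patent explicitly leaves the acquisition mode of the weight open.

```python
import numpy as np

class FeatureWeightNetwork:
    """Hypothetical stand-in for the converged second generation network:
    one fully connected layer followed by a sigmoid, so the output is a
    scalar weight in (0, 1) for the current face feature."""

    def __init__(self, feature_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        # in practice these parameters would come from the adversarial training
        self.w = rng.normal(scale=0.1, size=feature_dim)
        self.b = 0.0

    def weight_for(self, feature):
        score = float(self.w @ feature + self.b)
        # sigmoid squashes the score into (0, 1)
        return 1.0 / (1.0 + np.exp(-score))
```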
S10933, calculating the weight of the target face sample feature according to the weight of the current face feature.
S10935, obtaining the fused face sample characteristics based on a weighted average method according to the current face characteristics and the target face sample characteristics.
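Steps S10933 and S10935 can be sketched as a weighted average in which the weight of the target face sample feature is derived from that of the current face feature as its complement. The complement rule and the names are assumptions consistent with, but not mandated by, the text.

```python
import numpy as np

def fuse_features(current_feature, target_feature, current_weight):
    """Weighted-average fusion (S10935). The target feature's weight is
    calculated from the current feature's weight as its complement (S10933),
    so the two weights sum to 1."""
    target_weight = 1.0 - current_weight
    return current_weight * current_feature + target_weight * target_feature
```

The fused feature then replaces the original target face sample feature in the face sample portrait set (S1095), so the set's size never grows.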
S1095, replacing the target face sample features corresponding to the target user identification with the fused face sample features in the face sample portrait set.
In the embodiment of the invention, updating the face sample portrait corresponding to the target user identifier according to the current face feature changes neither the number of face sample portraits in the face sample portrait set nor the number of target face sample features within a portrait. The face sample portrait set is thus updated adaptively while avoiding the wasted storage space and the excessive dependence on users found in the prior art.
The embodiment of the invention discloses a self-adaptive identification device, as shown in fig. 10, comprising:
a face sample portrait set construction module 201, configured to construct a face sample portrait set, where the face sample portrait set includes a user identifier and a face sample portrait corresponding to the user identifier; each face sample portrait includes at least one face feature;
A current user image acquisition module 203, configured to acquire a current user image;
a current face feature extraction module 205, configured to extract a current face feature according to the current user image;
a matching module 207, configured to query a matching face feature set in the face sample portrait set according to the current face feature, where a dissimilarity degree between elements in the matching face feature set and the current face feature is less than a preset threshold;
an updating module 209, configured to determine a target user identifier according to the matched face feature set if the matched face feature set is not empty; and updating the face sample portrait corresponding to the target user identifier according to the current face characteristics.
Specifically, the adaptive recognition device embodiment and the method embodiment of the present invention are based on the same inventive concept; for implementation details of the adaptive recognition device, reference may be made to the method embodiment, which is not repeated here.
The embodiment of the invention also provides a computer storage medium that can store a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the adaptive identification method according to the embodiment of the invention, the method comprising at least the following steps:
A method of adaptive recognition, the method comprising:
constructing a face sample portrait set, wherein the face sample portrait set comprises a user identifier and a face sample portrait corresponding to the user identifier; each face sample portrait includes at least one face feature;
acquiring a current user image;
extracting current face features according to the current user image;
inquiring a matched face feature set according to the current face feature in the face sample portrait set, wherein the dissimilarity degree between elements in the matched face feature set and the current face feature is smaller than a preset threshold;
if the matched face feature set is not empty, determining a target user identification according to the matched face feature set; and updating the face sample portrait corresponding to the target user identifier according to the current face characteristics.
In a possible embodiment, the constructing a face sample portrait set includes:
for each user identifier, acquiring a face sample picture of a user pointed by the user identifier;
taking the face sample features extracted from the face sample pictures as target face sample features, so as to construct face sample portraits;
and constructing a face sample portrait set according to the corresponding relation between the user identification and the face sample portrait.
In a possible embodiment, the constructing a face sample portrait set includes:
for each user identifier, acquiring a plurality of face sample pictures of a user pointed by the user identifier;
extracting a face sample feature for each face sample picture to obtain a face sample feature set corresponding to each user;
clustering the face sample features in each face sample feature set, obtaining at least one target face sample feature according to a clustering result, and constructing a face sample portrait according to the obtained target face sample feature;
and constructing a face sample portrait set according to the corresponding relation between the user identification and the face sample portrait.
In a possible embodiment, the clustering the face sample features in each face sample feature set includes:
representing the face sample characteristics in the face sample characteristic set by multi-dimensional space points in a multi-dimensional space to obtain a multi-dimensional space point set corresponding to the face sample characteristic set;
randomly selecting K elements from the multidimensional space point set to serve as the center of the current cluster;
for each other multidimensional space point, calculating the dissimilarity of the multidimensional space point to the centers of K current clusters respectively, and classifying the multidimensional space point to the cluster with the lowest dissimilarity;
Recalculating the cluster centers of the K clusters according to the classification result; taking the calculated cluster center as the current cluster center, and repeating the steps: for each other multidimensional space point, calculating the dissimilarity of the multidimensional space point to the centers of K current clusters respectively, and classifying the multidimensional space point to the cluster with the lowest dissimilarity; until the result of the classification is not changed any more;
and outputting the current partitioning result as the clustering result.
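The clustering steps above describe a K-means procedure, which can be sketched as follows. The function and parameter names are illustrative; the patent does not fix a particular implementation or distance measure, so Euclidean distance is an assumption.

```python
import numpy as np

def kmeans_cluster(points, k, rng=None):
    """Minimal K-means sketch: random initial centers, assignment of each
    point to the lowest-dissimilarity cluster, and center recomputation
    until the classification no longer changes."""
    rng = rng or np.random.default_rng(0)
    # randomly select K elements of the point set as the current cluster centers
    centers = points[rng.choice(len(points), size=k, replace=False)]
    prev = None
    while True:
        # dissimilarity of every point to every current cluster center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)  # classify to the lowest-dissimilarity cluster
        if prev is not None and np.array_equal(labels, prev):
            break  # the classification result no longer changes
        prev = labels
        # recalculate the cluster centers from the classification result,
        # keeping the old center if a cluster happens to be empty
        centers = np.array([points[labels == j].mean(axis=0)
                            if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels, centers
```

Each cluster center (or a representative per cluster) can then serve as one target face sample feature of the user's portrait.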
In a possible embodiment, if the matching face feature set is not empty, determining the target user identifier according to the matching face feature set includes:
if only one matching face feature is in the matching face feature set, the matching face feature is directly determined to be a target face feature, and a user identifier corresponding to a face sample portrait including the target face feature in the face sample portrait set is determined to be a target user identifier.
If the matched face feature set comprises a plurality of elements, extracting a user identifier corresponding to a matched face sample portrait, wherein the matched face sample portrait is a face sample portrait comprising any element in the matched face feature set in the face sample portrait; displaying the related information of the user identification corresponding to the matching face sample portrait; and determining the target user identification in response to the selection instruction.
In a possible embodiment, the updating the face sample portrait corresponding to the target user identifier according to the current face feature includes:
determining target face sample characteristics corresponding to the target user identification in the matched face characteristic set;
performing feature fusion on the current face features and the target face sample features to obtain fused face sample features;
and substituting the fused face sample characteristics for the target face sample characteristics corresponding to the target user identification in the face sample portrait set.
In a possible embodiment, the feature fusing the current face feature and the target face sample feature to obtain a fused face sample feature includes:
acquiring the weight of the current face feature according to the feature weight network;
calculating the weight of the target face sample feature according to the weight of the current face feature;
and obtaining a fused face sample characteristic based on a weighted average method according to the current face characteristic and the target face sample characteristic.
Further, fig. 11 shows a schematic diagram of a hardware structure of an apparatus for implementing the method provided by the embodiment of the present invention; the apparatus may participate in forming, or may include, the device or system provided by the embodiment of the present invention. As shown in fig. 11, the apparatus 10 may include one or more processors 102 (shown as 102a, 102b, ..., 102n), which may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, the apparatus may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 11 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the apparatus 10 may also include more or fewer components than shown in fig. 11, or have a different configuration from that shown in fig. 11.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as a "data processing circuit". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the apparatus 10 (or mobile device). As referred to in the embodiments of the present application, the data processing circuit acts as a kind of processor control (e.g., selection of a variable-resistance termination path connected to an interface).
The memory 104 may be used to store software programs and modules of application software, and the processor 102 executes the software programs and modules stored in the memory 104 to perform various functional applications and data processing, i.e., implement an adaptive recognition method as described above, according to the program instructions/data storage device corresponding to the method according to the embodiments of the present invention. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 104 may further include memory located remotely from processor 102, which may be connected to device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of the apparatus 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a radio frequency (Radio Frequency, RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the device 10 (or mobile device).
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device and server embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only required.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the invention and is not intended to limit the invention to the precise forms disclosed; any modifications, equivalent substitutions, and improvements made within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (8)

1. An adaptive identification method, the method comprising:
for each user identifier, acquiring a plurality of face sample pictures of a user pointed by the user identifier; extracting a face sample feature for each face sample picture to obtain a face sample feature set corresponding to each user;
Representing the face sample characteristics in the face sample characteristic set by multi-dimensional space points in a multi-dimensional space to obtain a multi-dimensional space point set corresponding to the face sample characteristic set;
randomly selecting K elements from the multidimensional space point set to serve as the center of the current cluster; for each other multidimensional space point, calculating the dissimilarity of the multidimensional space point to the centers of K current clusters respectively, and classifying the multidimensional space point to the cluster with the lowest dissimilarity;
recalculating the cluster centers of the K clusters according to the classification result; taking the calculated cluster centers as the current cluster centers, and repeating the steps: for each other multidimensional space point, calculating the dissimilarity of the multidimensional space point to the centers of the K current clusters respectively, and classifying the multidimensional space point to the cluster with the lowest dissimilarity; until the result of the classification no longer changes; outputting the current partitioning result as the clustering result;
obtaining at least one target face sample feature according to the clustering result, and constructing a face sample portrait according to the obtained target face sample feature;
according to the corresponding relation between the user identification and the face sample portrait, a face sample portrait set is constructed, and the face sample portrait set comprises the user identification and the face sample portrait corresponding to the user identification; each face sample portrait includes at least one face feature;
Acquiring a current user image;
extracting current face features according to the current user image;
inquiring a matched face feature set according to the current face feature in the face sample portrait set, wherein the dissimilarity degree between elements in the matched face feature set and the current face feature is smaller than a preset threshold;
if the matched face feature set is not empty, determining a target user identification according to the matched face feature set; and updating the face sample portrait corresponding to the target user identifier according to the current face characteristics.
2. The method of claim 1, wherein the constructing a face sample portrait set comprises:
for each user identifier, acquiring a face sample picture of a user pointed by the user identifier;
taking the face sample features extracted from the face sample pictures as target face sample features, so as to construct face sample portraits;
and constructing a face sample portrait set according to the corresponding relation between the user identification and the face sample portrait.
3. The method of claim 1, wherein if the set of matching face features is not empty, determining the target user identification from the set of matching face features comprises:
If only one matching face feature is in the matching face feature set, directly determining the matching face feature as a target face feature, and determining a user identifier corresponding to a face sample portrait including the target face feature in the face sample portrait set as a target user identifier;
if the matched face feature set comprises a plurality of elements, extracting a user identifier corresponding to a matched face sample portrait, wherein the matched face sample portrait is a face sample portrait comprising any element in the matched face feature set in the face sample portrait; displaying the related information of the user identification corresponding to the matching face sample portrait; and determining the target user identification in response to the selection instruction.
4. The method of claim 1, wherein updating the face sample portrait corresponding to the target user identification based on the current face feature comprises:
determining target face sample characteristics corresponding to the target user identification in the matched face characteristic set;
performing feature fusion on the current face features and the target face sample features to obtain fused face sample features;
And replacing the target face sample characteristics corresponding to the target user identification with the fused face sample characteristics in the face sample portrait set.
5. The method of claim 4, wherein feature fusing the current face feature with the target face sample feature to obtain a fused face sample feature, comprises:
acquiring the weight of the current face feature according to the feature weight network;
calculating the weight of the target face sample feature according to the weight of the current face feature;
and obtaining a fused face sample characteristic based on a weighted average method according to the current face characteristic and the target face sample characteristic.
6. An adaptive identification device, the device comprising:
the face sample portrait set construction module is used for acquiring a plurality of face sample pictures of a user pointed by the user identification for each user identification; extracting a face sample feature for each face sample picture to obtain a face sample feature set corresponding to each user; representing the face sample characteristics in the face sample characteristic set by multi-dimensional space points in a multi-dimensional space to obtain a multi-dimensional space point set corresponding to the face sample characteristic set; randomly selecting K elements from the multidimensional space point set to serve as the centers of the current clusters; for each other multidimensional space point, calculating the dissimilarity of the multidimensional space point to the centers of the K current clusters respectively, and classifying the multidimensional space point to the cluster with the lowest dissimilarity; recalculating the cluster centers of the K clusters according to the classification result; taking the calculated cluster centers as the current cluster centers, and repeating the steps: for each other multidimensional space point, calculating the dissimilarity of the multidimensional space point to the centers of the K current clusters respectively, and classifying the multidimensional space point to the cluster with the lowest dissimilarity; until the result of the classification no longer changes; outputting the current partitioning result as the clustering result; obtaining at least one target face sample feature according to the clustering result, and constructing a face sample portrait according to the obtained target face sample feature; according to the corresponding relation between the user identification and the face sample portrait, a face sample portrait set is constructed, and the face sample portrait set comprises the user identification and the face sample portrait corresponding to the user identification; each face sample portrait includes at least one face feature;
The current user image acquisition module is used for acquiring a current user image;
the current face feature extraction module is used for extracting current face features according to the current user image;
the matching module is used for inquiring a matching face feature set in the face sample portrait set according to the current face feature, and the dissimilarity degree between elements in the matching face feature set and the current face feature is smaller than a preset threshold;
the updating module is used for determining a target user identifier according to the matched face feature set if the matched face feature set is not empty; and updating the face sample portrait corresponding to the target user identifier according to the current face characteristics.
7. An apparatus comprising a processor and a memory having stored therein at least one instruction, at least one program, code set, or instruction set that is loaded and executed by the processor to implement an adaptive recognition method as recited in any one of claims 1-5.
8. A computer storage medium having stored therein at least one instruction, at least one program, code set or instruction set, the at least one instruction, at least one program, code set or instruction set being loaded by a processor and performing an adaptive recognition method according to any one of claims 1-5.
CN201910699845.7A 2019-07-31 2019-07-31 Self-adaptive identification method, device, equipment and medium Active CN110472537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910699845.7A CN110472537B (en) 2019-07-31 2019-07-31 Self-adaptive identification method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910699845.7A CN110472537B (en) 2019-07-31 2019-07-31 Self-adaptive identification method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN110472537A CN110472537A (en) 2019-11-19
CN110472537B true CN110472537B (en) 2024-01-16

Family

ID=68508436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910699845.7A Active CN110472537B (en) 2019-07-31 2019-07-31 Self-adaptive identification method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110472537B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298100A (en) * 2020-02-21 2021-08-24 深圳码隆科技有限公司 Data cleaning method, self-service equipment and storage medium
CN114387635A (en) * 2020-10-20 2022-04-22 杭州海康威视数字技术股份有限公司 Method and device for updating biological feature library and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN109508700A (en) * 2018-12-28 2019-03-22 广州粤建三和软件股份有限公司 A kind of face identification method, system and storage medium
CN109784157A (en) * 2018-12-11 2019-05-21 口碑(上海)信息技术有限公司 A kind of image processing method, apparatus and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2007102021A (en) * 2007-01-19 2008-07-27 Корпораци "Самсунг Электроникс Ко., Лтд." (KR) METHOD AND SYSTEM OF IDENTITY RECOGNITION
NL2004829C2 (en) * 2010-06-07 2011-12-08 Univ Amsterdam Method for automated categorization of human face images based on facial traits.

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN109784157A (en) * 2018-12-11 2019-05-21 口碑(上海)信息技术有限公司 A kind of image processing method, apparatus and system
CN109508700A (en) * 2018-12-28 2019-03-22 广州粤建三和软件股份有限公司 A kind of face identification method, system and storage medium

Also Published As

Publication number Publication date
CN110472537A (en) 2019-11-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant