CN116704581A - Face recognition method, device, equipment and storage medium - Google Patents

Face recognition method, device, equipment and storage medium

Info

Publication number
CN116704581A
CN116704581A (application number CN202310720333.0A)
Authority
CN
China
Prior art keywords
training
sub
sample
target
face recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310720333.0A
Other languages
Chinese (zh)
Inventor
瞿晓阳
王健宗
陈远钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310720333.0A priority Critical patent/CN116704581A/en
Publication of CN116704581A publication Critical patent/CN116704581A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02 Banking, e.g. interest calculation or account maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Finance (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Accounting & Taxation (AREA)
  • Medical Informatics (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Development Economics (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a face recognition method, apparatus, device, and storage medium, belonging to the technical field of financial technology. The method includes the following steps: acquiring a plurality of pre-trained sub-models; receiving candidate deletion requests to form a request set; when the first number of candidate deletion requests is greater than or equal to a number threshold, determining a target sample sub-training set and the target deletion requests; deleting the corresponding sample face images from the target sample sub-training set to obtain a forgetting sub-training set; and retraining the corresponding pre-trained sub-model on the forgetting sub-training set to obtain a target face recognition model, which then recognizes a face image to be recognized and outputs a face recognition result. When the target face recognition model is used to process business in the financial industry, the computational overhead of the face recognition model during model forgetting is reduced, training efficiency is effectively improved, the business processing efficiency of the financial business system is improved, and the user experience is preserved.

Description

Face recognition method, device, equipment and storage medium
Technical Field
The present application relates to, but is not limited to, the field of financial technology, and in particular to a face recognition method, apparatus, device, and storage medium.
Background
With the gradual maturation of pattern recognition technology, biometric identification of individuals based on biological characteristics has begun to be applied and promoted in the field of identity recognition, and many payment platforms have already launched shortcut payment methods such as face-scan payment based on face recognition.
When a financial business system processes business in the financial industry, identity authentication is usually implemented through face recognition. The financial business system may be an insurance system, a banking system, a transaction system, or an order system. Face recognition is a biometric technology that identifies a person based on facial feature information. To improve recognition accuracy, a face recognition model is usually trained on real face data, and the trained model retains a memory of that data. When a user requests deletion of their face data, the face data on the server must be deleted and the face recognition model must be made to forget it.
At present, when a face recognition model is made to forget, the corresponding face data is deleted from the training set of the financial business system and the model is retrained on the processed training set. However, the training set of a financial business system is usually large, so the face recognition model consumes a large amount of computing power during model forgetting and training efficiency is low. Moreover, the face recognition model cannot be used normally while forgetting is in progress, which affects the business processing efficiency of the financial business system and degrades the user experience.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiments of the present application provide a face recognition method, apparatus, device, and storage medium that can reduce the computational overhead of a face recognition model during model forgetting, improve training efficiency, and thereby improve the business processing efficiency of a financial business system and preserve the user experience.
To achieve the above object, a first aspect of the embodiments of the present application provides a face recognition method, including: acquiring a pre-trained face recognition model and a plurality of sample sub-training sets, wherein the pre-trained face recognition model includes a plurality of pre-trained sub-models, each sample sub-training set corresponds to one pre-trained sub-model, each pre-trained sub-model is obtained by training on its corresponding sample sub-training set, and each sample sub-training set includes a plurality of sample face images; receiving a plurality of candidate deletion requests and arranging them in order of receiving time to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with sample face images; when a first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set among the sample sub-training sets according to the request set, and taking the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests; removing the target deletion requests from the request set, and deleting the corresponding sample face images from the target sample sub-training set according to the target deletion requests to obtain a forgetting sub-training set; retraining the corresponding pre-trained sub-model on the forgetting sub-training set, and adjusting the pre-trained face recognition model according to the retraining result to obtain a target face recognition model; and acquiring a face image to be recognized and inputting it into the target face recognition model to obtain a face recognition result.
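The flow above resembles shard-based machine unlearning: deletion requests are queued, and once the queue reaches the threshold, only the sub-model whose shard is most affected is retrained. A minimal, hypothetical sketch in Python (all names such as `Shard` and `process_deletions` are illustrative and not from the patent; real sub-model retraining is replaced by a version bump):

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Shard:
    """A sample sub-training set plus its pre-trained sub-model (stand-in)."""
    images: dict            # image_id -> face image (any object here)
    model_version: int = 0  # bumped each time the sub-model is retrained

    def retrain(self):
        # Stand-in for retraining the sub-model on the remaining images.
        self.model_version += 1


def process_deletions(shards, request_queue, threshold):
    """Batch deletion requests; retrain only the most-affected shard.

    request_queue is a list of (shard_id, image_id) pairs in receiving order.
    Returns the list of shard ids that were retrained.
    """
    retrained = []
    while len(request_queue) >= threshold:
        # Count pending requests per shard; the busiest shard is the target.
        counts = Counter(sid for sid, _ in request_queue)
        target_sid, _ = counts.most_common(1)[0]
        # Split the queue: handle the target shard's requests, keep the rest.
        handled = [(s, i) for s, i in request_queue if s == target_sid]
        request_queue[:] = [(s, i) for s, i in request_queue if s != target_sid]
        # Deleting the images yields the "forgetting sub-training set".
        for _, image_id in handled:
            shards[target_sid].images.pop(image_id, None)
        shards[target_sid].retrain()
        retrained.append(target_sid)
    return retrained
```

Because only one shard's sub-model is retrained per batch, the remaining sub-models stay usable throughout, which is the source of the claimed efficiency gain.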
In some embodiments, retraining the corresponding pre-trained sub-model on the forgetting sub-training set and adjusting the pre-trained face recognition model according to the retraining result to obtain the target face recognition model includes: detecting, until a stop instruction is received, whether a new candidate deletion request has been received; if a candidate deletion request is received, inserting it into the request set in order of receiving time, and re-executing the steps of determining a target sample sub-training set among the sample sub-training sets when the first number of candidate deletion requests in the request set is greater than or equal to the preset number threshold, taking the corresponding candidate deletion requests as target deletion requests, and deleting the corresponding sample face images from the target sample sub-training set, thereby obtaining a plurality of forgetting sub-training sets; sequentially retraining the corresponding pre-trained sub-models on the forgetting sub-training sets to obtain a forgetting sub-model for each retrained pre-trained sub-model; and adjusting the pre-trained face recognition model according to all the forgetting sub-models to obtain the target face recognition model.
In some embodiments, acquiring the pre-trained face recognition model and the plurality of sample sub-training sets includes: acquiring a pre-trained face recognition model and a sample training set; and splitting the sample training set according to a preset number of sub-models to obtain a plurality of sample sub-training sets, wherein the number of sample sub-training sets equals the number of sub-models and any two sample sub-training sets have no intersection.
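The splitting step above can be illustrated with a simple round-robin partition. The assignment policy is an assumption; the patent only requires that the sample sub-training sets be pairwise disjoint:

```python
def split_into_shards(sample_ids, num_submodels):
    """Partition sample_ids into num_submodels disjoint sub-training sets.

    Round-robin assignment keeps the shards roughly the same size, which
    keeps per-shard retraining cost roughly uniform.
    """
    shards = [[] for _ in range(num_submodels)]
    for idx, sample_id in enumerate(sample_ids):
        shards[idx % num_submodels].append(sample_id)
    return shards
```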
In some embodiments, the pre-trained face recognition model is trained as follows: acquiring an initial face recognition model and the sample identity labels of the sample face images; splitting the initial face recognition model according to the number of sub-models to obtain a plurality of initial sub-models, wherein the initial sub-models are in one-to-one correspondence with the sample sub-training sets; for each initial sub-model, training it on the corresponding sample sub-training set and sample identity labels to obtain a pre-trained sub-model; and obtaining the pre-trained face recognition model from all the pre-trained sub-models.
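A minimal sketch of this per-shard pre-training step, assuming each initial sub-model is trained only on its own shard; actual training is replaced by recording the labelled samples the shard saw, and `pretrain_submodels` is a hypothetical name:

```python
def pretrain_submodels(shards, labels):
    """Train one sub-model per shard, independently of the other shards.

    shards: list of lists of image ids (disjoint sample sub-training sets).
    labels: image_id -> sample identity label.
    A "sub-model" here is just the labelled data it was trained on, so that
    retraining a shard never touches samples from any other shard.
    """
    submodels = []
    for shard in shards:
        submodels.append({image_id: labels[image_id] for image_id in shard})
    return submodels
```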
In some embodiments, determining a target sample sub-training set among the sample sub-training sets according to the request set when the first number of candidate deletion requests in the request set is greater than or equal to the preset number threshold, and taking the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests, includes: when the first number of candidate deletion requests in the request set is greater than or equal to the preset number threshold, determining, according to the request set, a second number of candidate deletion requests corresponding to each sample sub-training set; taking the sample sub-training set whose second number is largest as the target sample sub-training set; and taking the candidate deletion requests corresponding to the target sample sub-training set as the target deletion requests.
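The selection described above amounts to a per-shard count of pending requests followed by taking the maximum. A tiny illustration (the tie-breaking behaviour is an assumption, since the patent does not specify it):

```python
from collections import Counter


def pick_target_shard(request_set):
    """Return (target shard id, its second number of pending requests).

    request_set: list of (shard_id, image_id) candidate deletion requests.
    """
    counts = Counter(shard_id for shard_id, _ in request_set)
    target, second_number = counts.most_common(1)[0]
    return target, second_number
```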
In some embodiments, the target face recognition model includes a plurality of target sub-models in one-to-one correspondence with the pre-trained sub-models. Acquiring a face image to be recognized and inputting it into the target face recognition model to obtain a face recognition result includes: acquiring the face image to be recognized; inputting the face image to be recognized into each target sub-model; extracting features of the face image through each target sub-model and outputting a feature vector; and fusing the feature vectors output by the target sub-models and classifying the fusion result to obtain the face recognition result.
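The fusion-and-classification step can be sketched as follows. The patent does not fix the fusion operator or the classifier, so mean pooling and cosine-similarity matching against a synthetic gallery are assumptions:

```python
import numpy as np


def fuse_features(feature_vectors):
    """Fuse the per-sub-model feature vectors for one face image (mean pooling)."""
    return np.mean(np.stack(feature_vectors), axis=0)


def classify(fused, gallery):
    """Match the fused vector against enrolled identities by cosine similarity.

    gallery: identity name -> enrolled feature vector (synthetic here).
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(gallery, key=lambda name: cos(fused, gallery[name]))
```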
To achieve the above object, a second aspect of the embodiments of the present application provides a model training method, including: acquiring a pre-trained face recognition model and a plurality of sample sub-training sets, wherein the pre-trained face recognition model includes a plurality of pre-trained sub-models, each sample sub-training set corresponds to one pre-trained sub-model, each pre-trained sub-model is obtained by training on its corresponding sample sub-training set, and each sample sub-training set includes a plurality of sample face images; receiving a plurality of candidate deletion requests and arranging them in order of receiving time to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with sample face images; when a first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set among the sample sub-training sets according to the request set, and taking the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests; removing the target deletion requests from the request set, and deleting the corresponding sample face images from the target sample sub-training set to obtain a forgetting sub-training set; and retraining the corresponding pre-trained sub-model on the forgetting sub-training set, and adjusting the pre-trained face recognition model according to the retraining result to obtain a target face recognition model.
To achieve the above object, a third aspect of the embodiments of the present application provides a face recognition apparatus, including: a first acquisition unit, configured to acquire a pre-trained face recognition model and a plurality of sample sub-training sets, wherein the pre-trained face recognition model includes a plurality of pre-trained sub-models, each sample sub-training set corresponds to one pre-trained sub-model, each pre-trained sub-model is obtained by training on its corresponding sample sub-training set, and each sample sub-training set includes a plurality of sample face images; a first receiving unit, configured to receive a plurality of candidate deletion requests and arrange them in order of receiving time to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with sample face images; a first judging unit, configured to determine, according to the request set, a target sample sub-training set among the sample sub-training sets when a first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, and to take the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests; a first deleting unit, configured to remove the target deletion requests from the request set and to delete the corresponding sample face images from the target sample sub-training set according to the target deletion requests to obtain a forgetting sub-training set; a first retraining unit, configured to retrain the corresponding pre-trained sub-model on the forgetting sub-training set and to adjust the pre-trained face recognition model according to the retraining result to obtain a target face recognition model; and a first recognition unit, configured to acquire a face image to be recognized and input it into the target face recognition model to obtain a face recognition result.
To achieve the above object, a fourth aspect of the embodiments of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor executes the computer program to implement the face recognition method described in the first aspect or the model training method described in the second aspect.
To achieve the above object, a fifth aspect of the embodiments of the present application proposes a storage medium that is a computer-readable storage medium, the storage medium storing a computer program that, when executed by a processor, implements the face recognition method described in the first aspect or the model training method described in the second aspect.
The present application provides a face recognition method, apparatus, device, and storage medium. An embodiment of the application includes the following steps: acquiring a pre-trained face recognition model and a plurality of sample sub-training sets, wherein the pre-trained face recognition model includes a plurality of pre-trained sub-models, each sample sub-training set corresponds to one pre-trained sub-model, each pre-trained sub-model is obtained by training on its corresponding sample sub-training set, and each sample sub-training set includes a plurality of sample face images; receiving a plurality of candidate deletion requests and arranging them in order of receiving time to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with sample face images; when a first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set among the sample sub-training sets according to the request set, and taking the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests; removing the target deletion requests from the request set, and deleting the corresponding sample face images from the target sample sub-training set to obtain a forgetting sub-training set; retraining the corresponding pre-trained sub-model on the forgetting sub-training set, and adjusting the pre-trained face recognition model according to the retraining result to obtain a target face recognition model; and acquiring a face image to be recognized and inputting it into the target face recognition model to obtain a face recognition result.
According to the scheme provided by the embodiments of the present application, the pre-trained face recognition model is divided into a plurality of pre-trained sub-models, and the sample sub-training set corresponding to each pre-trained sub-model is obtained. Candidate deletion requests are then received; once candidate deletion requests exceeding the number threshold have been received, the target sample sub-training set is determined among the sample sub-training sets according to the request set and the target deletion requests are determined. The sample face images corresponding to the target deletion requests are then deleted from the target sample sub-training set to obtain a forgetting sub-training set, and the corresponding pre-trained sub-model is retrained on it. Because the model forgetting operation is performed only when the number of candidate deletion requests exceeds the number threshold, the number of retraining passes is effectively reduced. In addition, when model forgetting is performed, only the corresponding pre-trained sub-model is retrained on the forgetting sub-training set, and not all sample face images need to be processed, so the computational cost of model forgetting is further reduced, training efficiency is effectively improved, the business processing efficiency of the financial business system is improved, and the user experience is preserved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate and do not limit the application.
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of retraining a pre-training sub-model provided in another embodiment of the application;
FIG. 3 is a flow chart of a method for obtaining a sample sub-training set according to another embodiment of the present application;
FIG. 4 is a flowchart of a method for pre-training a face recognition model according to another embodiment of the present application;
FIG. 5 is a flow chart of a method of determining a target sample sub-training set according to another embodiment of the present application;
fig. 6 is a flowchart of a method for obtaining a face recognition result according to another embodiment of the present application;
FIG. 7 is a flow chart of a model training method provided by another embodiment of the present application;
fig. 8 is a schematic structural diagram of a face recognition device according to another embodiment of the present application;
FIG. 9 is a schematic diagram of a model training apparatus according to another embodiment of the present application;
fig. 10 is a schematic hardware structure of an electronic device according to another embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the description of the present application, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description, in the claims and in the above-described figures, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
First, several terms involved in the present application are explained:
Artificial intelligence (AI): a technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
At present, when a face recognition model is made to forget, the corresponding face data is deleted from the training set of the financial business system and the model is retrained on the processed training set. However, the training set of a financial business system is usually large, so the face recognition model consumes a large amount of computing power during model forgetting and training efficiency is low. Moreover, the face recognition model cannot be used normally while forgetting is in progress, which affects the business processing efficiency of the financial business system and degrades the user experience.
To address the problems that a face recognition model consumes a large amount of computing power during model forgetting, has low training efficiency, and degrades the user experience, the present application provides a face recognition method, apparatus, device, and storage medium. The method includes: acquiring a pre-trained face recognition model and a plurality of sample sub-training sets, wherein the pre-trained face recognition model includes a plurality of pre-trained sub-models, each sample sub-training set corresponds to one pre-trained sub-model, each pre-trained sub-model is obtained by training on its corresponding sample sub-training set, and each sample sub-training set includes a plurality of sample face images; receiving a plurality of candidate deletion requests and arranging them in order of receiving time to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with sample face images; when a first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set among the sample sub-training sets according to the request set, and taking the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests; removing the target deletion requests from the request set, and deleting the corresponding sample face images from the target sample sub-training set to obtain a forgetting sub-training set; retraining the corresponding pre-trained sub-model on the forgetting sub-training set, and adjusting the pre-trained face recognition model according to the retraining result to obtain a target face recognition model; and acquiring a face image to be recognized and inputting it into the target face recognition model to obtain a face recognition result. According to this scheme, the pre-trained face recognition model is divided into a plurality of pre-trained sub-models with corresponding sample sub-training sets; candidate deletion requests are batched, and only once their number reaches the threshold is a target sample sub-training set selected, its affected sample face images deleted, and the corresponding pre-trained sub-model retrained on the resulting forgetting sub-training set. Performing the model forgetting operation only when the candidate deletion requests exceed the number threshold effectively reduces the number of retraining passes, and retraining only the affected sub-model avoids processing all sample face images, so the computational cost of model forgetting is further reduced, training efficiency is effectively improved, the business processing efficiency of the financial business system is improved, and the user experience is preserved.
The face recognition method, apparatus, device, and storage medium provided by the embodiments of the present application are described in detail through the following embodiments; the face recognition method is described first.
The embodiment of the application provides a face recognition method and relates to the technical field of financial technology. The face recognition method provided by the embodiment of the application may be applied to a terminal, to a server side, or to software running in the terminal or the server side. In some embodiments, the terminal may be a smart phone, tablet, notebook computer, desktop computer, or the like; the server side may be configured as an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms; the software may be an application implementing the face recognition method, but is not limited to the above forms.
The application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It should be noted that, in the specific embodiments of the present application, whenever processing is required involving data related to user identity or characteristics, such as user information, user behavior data, user history data, or user location information, the user's permission or consent is obtained first, and the collection, use, and processing of such data comply with relevant laws, regulations, and standards. In addition, when an embodiment of the application needs to acquire a user's sensitive personal information, the user's separate permission or separate consent is obtained through a pop-up window, a jump to a confirmation page, or similar means; only after the user's separate permission or separate consent is explicitly obtained are the user data necessary for normal operation of the embodiment acquired.
Embodiments of the present application will be further described below with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is a flowchart of a face recognition method according to an embodiment of the present application. The face recognition method comprises the following steps:
step S110, a pre-training face recognition model and a plurality of sample sub-training sets are obtained, wherein the pre-training face recognition model comprises a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, the pre-training sub-models are obtained by training the corresponding sample sub-training sets, and any sample sub-training set comprises a plurality of sample face images;
Step S120, a plurality of candidate deletion requests are received, and the received plurality of candidate deletion requests are arranged according to the sequence of receiving time to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with the sample face images;
step S130, when the first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set in each sample sub-training set according to the request set, and taking the candidate deletion request corresponding to the target sample sub-training set as a target deletion request;
step S140, deleting the target deleting request in the request set, and deleting the corresponding sample face image in the target sample sub-training set according to the target deleting request to obtain a forgetting sub-training set;
step S150, retraining a corresponding pre-training sub-model according to the forgetting sub-training set, and adjusting the pre-training face recognition model according to the retraining result to obtain a target face recognition model;
step S160, a face image to be recognized is obtained, and the face image to be recognized is input into a target face recognition model to obtain a face recognition result.
It can be understood that, when processing related business in the financial industry, identity authentication is usually realized by face recognition. To improve the recognition accuracy of the face recognition model, a large amount of user face data is required to train it, and the trained face recognition model retains a memory of that face data. When users require deletion of their face data, candidate deletion requests from the target users are received; when the first number of candidate deletion requests is greater than or equal to the number threshold, the target sample sub-training set is determined and the target deletion requests are determined in the request set. The sample face images corresponding to the target deletion requests are then deleted from the target sample sub-training set to obtain the forgetting sub-training set, and the pre-training sub-model is retrained using the forgetting sub-training set, so that the memory of the corresponding sample face images retained by the pre-training sub-model is gradually reduced. When that memory has been reduced enough to be neglected, retraining of the pre-training sub-model is terminated and the target face recognition model is determined, thereby realizing model forgetting, so that users' face data is not leaked when the target face recognition model performs recognition. Based on this, the pre-training face recognition model is divided into a plurality of pre-training sub-models, the sample sub-training sets corresponding to the pre-training sub-models are obtained, candidate deletion requests are received, and after candidate deletion requests exceeding the number threshold have been received, the target sample sub-training set is determined among the sample sub-training sets according to the request set and the target deletion
requests are determined. Further, the sample face images corresponding to the target deletion requests are deleted from the target sample sub-training set to obtain the forgetting sub-training set, and the pre-training sub-model is retrained using it. Performing the model forgetting operation only when the number of candidate deletion requests reaches the number threshold effectively reduces the number of retraining operations; in addition, each model forgetting operation retrains only the pre-training sub-model corresponding to the forgetting sub-training set, without processing all sample face images, which further reduces the computational cost of the model forgetting process, effectively improves training efficiency, further improves the business processing efficiency of a financial service system, and ensures user experience.
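The shard-based forgetting scheme described above can be sketched in a few lines of Python. This is a minimal illustration under assumed data structures (shards held as lists of sample identifiers, a caller-supplied `retrain_submodel` placeholder, an illustrative threshold value), not the patent's actual implementation:

```python
MU = 3  # preset number threshold (mu); illustrative value, not from the text

def forget(shards, models, request_set, retrain_submodel, mu=MU):
    """Delete requested samples from the most-affected shard and retrain
    only the corresponding sub-model. Returns the handled request ids."""
    if len(request_set) < mu:
        return []  # below the number threshold: defer the forgetting operation
    # Count candidate deletion requests per shard (the "second number").
    counts = {i: sum(1 for r in request_set if r in shards[i]) for i in shards}
    target = max(counts, key=counts.get)  # target sample sub-training set
    handled = [r for r in request_set if r in shards[target]]
    # Delete the corresponding sample face images -> forgetting sub-training set.
    shards[target] = [s for s in shards[target] if s not in handled]
    # Retrain only the affected pre-training sub-model.
    models[target] = retrain_submodel(shards[target])
    return handled
```

Note that sub-models for shards not named by any target deletion request are left untouched, which is where the computational saving over full retraining comes from.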
Taking a banking system as an example, when a user applies for a loan or a credit card through the banking system, identity authentication is usually realized by face recognition; the banking system usually needs to train the face recognition model on real face data, and when a user requires deletion of their face data, the face recognition model needs to perform model forgetting.
For example, when a user needs to apply for a loan, the user accesses the banking system through a terminal; the banking system sends an authentication instruction to the terminal, and after receiving it, the terminal displays an authentication permission prompt on its screen. After obtaining the user's permission or consent, the terminal can acquire the user's face image to be recognized and send it to the banking system, which authenticates the user's identity using the face recognition model; after authentication passes, the subsequent loan application process can proceed. The face recognition model of the banking system is a pre-training face recognition model comprising a plurality of pre-training sub-models, each trained on its corresponding sample sub-training set; to ensure the accuracy of the banking system's face recognition model, a large number of sample face images are required for training. When the first number of candidate deletion requests in the request set is greater than or equal to the number threshold, the corresponding sample face images in the target sample sub-training set are deleted to obtain the forgetting sub-training set, the corresponding pre-training sub-model is retrained, and the pre-training face recognition model is adjusted according to the retraining result to obtain the target face recognition model, thereby completing the model forgetting. By setting the number threshold, the number of retraining operations is reduced and the computational cost of the model forgetting
process is lowered, effectively improving training efficiency and ensuring user experience.
It should be noted that, for the S pre-training sub-models of the banking system, there are correspondingly S sample sub-training sets, and the i-th pre-training sub-model M_i is obtained by training on the i-th sample sub-training set D_i. Candidate deletion requests are received, the number threshold is denoted μ, and the first number of the request set is K; when K ≥ μ, the target sample sub-training set is determined, the target deletion requests are determined in the request set and deleted from it, deletion processing is performed on the target sample sub-training set, and the pre-training sub-model is retrained.
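The request-set bookkeeping just described (arrival ordering, first number K, threshold μ) can be sketched as follows. The `RequestSet` class, its method names, and the value of μ are illustrative assumptions, not the patent's concrete implementation:

```python
from collections import deque

MU = 3  # number threshold mu; illustrative value

class RequestSet:
    """Candidate deletion requests kept in order of receiving time.
    A request is assumed to be the id of one sample face image."""

    def __init__(self, mu=MU):
        self.mu = mu
        self.requests = deque()

    def receive(self, request_id):
        self.requests.append(request_id)  # arrival order preserved

    @property
    def first_number(self):
        return len(self.requests)  # K in the text

    def should_forget(self):
        return self.first_number >= self.mu  # trigger forgetting when K >= mu
```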
In specific practice, financial industry related services include, but are not limited to: business, payment, trade, securities, banking, tax, credit card, shopping and insurance services.
In addition, referring to fig. 2, in an embodiment, step S150 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S210, detecting whether a candidate deletion request is received or not until a stop instruction is received;
step S220, if a candidate deletion request is received, inserting the candidate deletion request into a request set according to the sequence of receiving time, and re-executing the step of determining a target sample sub-training set in each sample sub-training set according to the request set when the first number of the candidate deletion requests in the request set is greater than or equal to a preset number threshold, and taking the candidate deletion request corresponding to the target sample sub-training set as the target deletion request so as to delete the corresponding sample face images in the target sample sub-training set to obtain a plurality of forgetting sub-training sets;
Step S230, sequentially retraining corresponding pre-training sub-models according to each forgetting sub-training set to obtain forgetting sub-models corresponding to each pre-training sub-model;
and step S240, adjusting the pre-training face recognition model according to all forgetting sub-models to obtain the target face recognition model.
It will be appreciated that, before the stop instruction is received, candidate deletion requests are continuously received, and the step of deleting sample face images from the target sample sub-training set is performed cyclically. For example, let the number threshold be μ and the current first number be K, with K ≥ μ; a target sample sub-training set is determined among the sample sub-training sets of the banking system, assumed to be D_x, and the number of target deletion requests corresponding to this target sample sub-training set is R_1. The target deletion requests are deleted from the request set and deletion processing is performed on D_x, after which the current first number is K − R_1. If N further candidate deletion requests are received, then when K − R_1 + N ≥ μ, a target sample sub-training set is determined again among the sample sub-training sets, assumed to be D_y, with R_2 corresponding target deletion requests; the target deletion requests are deleted from the request set and deletion processing is performed on D_y. The step of deleting sample face images from the target sample sub-training set is performed cyclically until a stop instruction is received, after which the corresponding pre-training sub-models are retrained according to each target sample sub-training set after deletion processing, obtaining the forgetting sub-models and thus the target face recognition model.
It should be noted that, in any two deletion processes, the respective target deletion requests may delete sample face images from the same target sample sub-training set; that is, deletion processing may be performed on a given target sample sub-training set repeatedly.
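The cyclic deletion process above (K falling by R_1 per round, N new requests arriving, retriggering when K − R_1 + N ≥ μ) can be simulated with a short loop. The shard representation and the way incoming requests are batched are assumptions for illustration only:

```python
def run_forgetting_loop(incoming_batches, shards, mu):
    """incoming_batches: iterable of lists of request ids, in arrival order
    (iteration ending stands in for the stop instruction).
    Returns the forgetting rounds as (target_shard, removed_ids) pairs."""
    pending, rounds = [], []
    for batch in incoming_batches:
        pending.extend(batch)               # insert by receiving time
        while len(pending) >= mu:           # K >= mu triggers forgetting
            counts = {i: sum(r in shards[i] for r in pending) for i in shards}
            target = max(counts, key=counts.get)
            removed = [r for r in pending if r in shards[target]]
            if not removed:
                break  # no pending request maps to any shard sample
            shards[target] = [s for s in shards[target] if s not in removed]
            pending = [r for r in pending if r not in removed]  # K <- K - R
            rounds.append((target, removed))
    return rounds
```

As the note above observes, nothing prevents consecutive rounds from selecting the same shard.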
In addition, referring to fig. 3, in an embodiment, step S110 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S310, a pre-training face recognition model and a sample training set are obtained;
step S320, splitting the sample training set according to the preset number of sub-models to obtain a plurality of sample sub-training sets, wherein the number of the sample sub-training sets is the number of the sub-models, and no intersection exists between any two sample sub-training sets.
It can be appreciated that the sample training set D_full is first obtained and split according to the number of sub-models; assuming the number of sub-models of the banking system is S, S sample sub-training sets are obtained, and the sample sub-training sets can correspond one-to-one with the pre-training sub-models. The relation between the sample training set and the sample sub-training sets is:
D_full = D_1 ∪ D_2 ∪ … ∪ D_S
where ∪ denotes the union operation on sets and D_i is the i-th sample sub-training set.
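Step S320's disjoint split can be sketched as follows. The round-robin (stride) assignment is one arbitrary choice that satisfies the stated constraints (S shards, pairwise disjoint, union equal to D_full); it is not a partitioning rule prescribed by the text:

```python
def split_training_set(d_full, s):
    """Split the sample training set into S pairwise-disjoint sample
    sub-training sets whose union is d_full (step S320)."""
    # Stride slicing assigns every s-th sample to the same shard,
    # so no sample appears in two shards and none is dropped.
    return [d_full[i::s] for i in range(s)]
```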
In addition, referring to fig. 4, in an embodiment, the training step of the pre-training face recognition model in step S110 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S410, obtaining an initial face recognition model and sample identity labels of each sample face image;
step S420, splitting the initial face recognition model according to the number of the sub-models to obtain a plurality of initial sub-models, wherein the initial sub-models are in one-to-one correspondence with the sample sub-training sets;
step S430, training the initial sub-model according to the corresponding sample sub-training set and the sample identity label aiming at any initial sub-model to obtain a pre-training sub-model;
step S440, obtaining a pre-training face recognition model according to all the pre-training sub-models.
It can be understood that an initial face recognition model is first obtained and split according to the number of sub-models; assuming the number of sub-models of the banking system is S, S initial sub-models are obtained, and the sample sub-training sets can correspond one-to-one with the initial sub-models. For any initial sub-model, the corresponding sample sub-training set and sample identity labels are used as training data to train it, yielding a pre-training sub-model, and the pre-training face recognition model is then obtained. The relation between the pre-training face recognition model and the pre-training sub-models is:
M = {M_1, M_2, …, M_S}
where M_i is the i-th pre-training sub-model.
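The per-shard pre-training of steps S410 to S440 can be illustrated with a stand-in trainer. The `train_submodel` lookup "model" below is purely hypothetical; it only shows the one-sub-model-per-shard structure, not an actual face recognition training procedure:

```python
def train_submodel(shard):
    """Hypothetical trainer: each shard holds (image, identity_label)
    pairs, and the 'model' just memorizes that mapping."""
    return {image: label for image, label in shard}

def pretrain(shards):
    """One pre-training sub-model per sample sub-training set,
    in one-to-one correspondence (steps S420-S440)."""
    return [train_submodel(shard) for shard in shards]
```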
In addition, referring to fig. 5, in an embodiment, step S130 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S510, when the first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a second number of candidate deletion requests corresponding to each sample sub-training set according to the request set;
step S520, taking a sample sub-training set corresponding to a second number with the largest value as a target sample sub-training set;
in step S530, the candidate deletion request corresponding to the target sample sub-training set is used as the target deletion request.
It may be appreciated that, when the first number is greater than or equal to the number threshold, the target sample sub-training set and the target deletion requests need to be determined. First, according to the sample face images corresponding to the candidate deletion requests in the request set, the sample sub-training sets are traversed and the second number of candidate deletion requests corresponding to each sample sub-training set is determined. Then, among the sample sub-training sets, the one with the largest second number is taken as the target sample sub-training set; because the target sample sub-training set corresponds to the most candidate deletion requests, the pre-training sub-model corresponding to it is taken as the target model in the current retraining process, and the target deletion requests are determined accordingly.
In addition, since the candidate deletion requests are in one-to-one correspondence with the sample face images, in one sample sub-training set, the sample face images having a correspondence with each candidate deletion request in the sample sub-training set can be determined, and the number of the sample face images having a correspondence is taken as the second number, so that the candidate deletion request corresponding to the sample sub-training set refers to a candidate deletion request having a correspondence with the sample sub-training set in the request set, and the target deletion request refers to a candidate deletion request having a correspondence with the target sample sub-training set in the request set.
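Steps S510 to S530 reduce to per-shard counting followed by an argmax. The sketch below assumes requests and shard entries share the same sample identifiers, exploiting the one-to-one correspondence described above:

```python
def select_target(request_set, shards):
    """Return (target_shard_index, target_deletion_requests).
    The "second number" of a shard is how many candidate deletion
    requests in the request set map to its sample face images."""
    second_numbers = [
        sum(1 for r in request_set if r in shard) for shard in shards
    ]
    # Shard with the largest second number is the target (step S520).
    target_index = max(range(len(shards)), key=second_numbers.__getitem__)
    # Its requests become the target deletion requests (step S530),
    # kept in receiving-time order.
    target_requests = [r for r in request_set if r in shards[target_index]]
    return target_index, target_requests
```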
Additionally, referring to fig. 6, in an embodiment, the target face recognition model includes a plurality of target sub-models, wherein the target sub-models correspond one-to-one with the pre-training sub-models; step S160 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S610, obtaining a face image to be recognized;
step S620, inputting the face image to be recognized into each target sub-model;
step S630, extracting features of the face image to be identified through the target sub-model, and outputting feature vectors;
and step S640, carrying out fusion processing on the feature vectors output by each target submodel, and carrying out classification processing on the fusion result to obtain a face recognition result.
It can be understood that, in the process of recognizing the face image to be recognized, the face image to be recognized is input into each target sub-model; for example, the S target sub-models of the banking system each output a corresponding feature vector, and fusion processing is then performed on all the feature vectors to obtain a fusion vector. For a face image x to be recognized input into the banking system, the fusion result is:
P(x) = (1/S) · Σ_{i=1}^{S} M_i(x)
where P(x) is the fusion result and M_i(x) is the feature vector output by the i-th target sub-model for x. The fusion result is then classified through a linear layer to obtain the face recognition result.
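The fusion and classification of steps S610 to S640 can be sketched as below, assuming mean fusion of the feature vectors and a toy linear layer with caller-supplied weights; beyond "fusion" and "a linear layer", the actual fusion rule and classifier parameters are not specified by the text:

```python
def recognize(x, submodels, weights, labels):
    """submodels: callables, each returning an equal-length feature
    vector for image x; weights: one weight row per identity label."""
    vectors = [m(x) for m in submodels]              # one vector per target sub-model
    s = len(vectors)
    fused = [sum(col) / s for col in zip(*vectors)]  # element-wise mean fusion
    # Linear layer: score each identity as a dot product with the fused vector.
    scores = [sum(w_j * f_j for w_j, f_j in zip(w, fused)) for w in weights]
    best = max(range(len(scores)), key=scores.__getitem__)
    return labels[best]                              # face recognition result
```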
FIG. 7 is a flow chart of a model training method according to an embodiment of the present application, as shown in FIG. 7. The model training method includes, but is not limited to, the following steps:
step S710, a pre-training face recognition model and a plurality of sample sub-training sets are obtained, wherein the pre-training face recognition model comprises a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, the pre-training sub-models are obtained by training the corresponding sample sub-training sets, and any sample sub-training set comprises a plurality of sample face images;
step S720, receiving a plurality of candidate deletion requests, and arranging the received plurality of candidate deletion requests according to a receiving time sequence to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with the sample face images;
step S730, when the first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set in each sample sub-training set according to the request set, and taking the candidate deletion request corresponding to the target sample sub-training set as a target deletion request;
step S740, deleting the target deleting request in the request set, and deleting the corresponding sample face image in the target sample sub-training set according to the target deleting request to obtain a forgetting sub-training set;
Step S750, retraining the corresponding pre-training sub-model according to the forgetting sub-training set, and adjusting the pre-training face recognition model according to the retraining result to obtain the target face recognition model.
It can be understood that the specific implementation of the model training method is based on the same inventive concept as the face recognition method. The model training method divides the pre-training face recognition model into a plurality of pre-training sub-models and obtains the sample sub-training sets corresponding to the pre-training sub-models; candidate deletion requests are then received, and after candidate deletion requests exceeding the number threshold have been received, the target sample sub-training set is determined among the sample sub-training sets according to the request set and the target deletion requests are determined. Further, the sample face images corresponding to the target deletion requests are deleted from the target sample sub-training set to obtain the forgetting sub-training set, and the pre-training sub-model is retrained using the forgetting sub-training set. Performing the model forgetting operation only when the number of candidate deletion requests exceeds the number threshold effectively reduces the number of retraining operations and the computational cost of the model forgetting process.
It should be noted that, the detailed principles of the steps S710 to S750 can be referred to the explanation of the steps S110 to S150, and are not repeated here.
In addition, referring to fig. 8, the present application also provides a face recognition apparatus 800, including:
a first obtaining unit 810, configured to obtain a pre-training face recognition model and a plurality of sample sub-training sets, where the pre-training face recognition model includes a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, the pre-training sub-models are obtained by training corresponding sample sub-training sets, and any sample sub-training set includes a plurality of sample face images;
a first receiving unit 820, configured to receive a plurality of candidate deletion requests, and arrange the received plurality of candidate deletion requests according to a receiving time sequence to form a request set, where the candidate deletion requests are in one-to-one correspondence with the sample face images;
a first judging unit 830, configured to determine, according to the request set, a target sample sub-training set in each sample sub-training set when a first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, and take, as a target deletion request, a candidate deletion request corresponding to the target sample sub-training set;
A first deleting unit 840, configured to delete the target deleting request in the request set, and delete the corresponding sample face image in the target sample sub-training set according to the target deleting request, so as to obtain a forgetting sub-training set;
a first retraining unit 850, configured to retrain the corresponding pre-training sub-model according to the forgetting sub-training set, and perform adjustment processing on the pre-training face recognition model according to the retraining result, so as to obtain a target face recognition model;
the first recognition unit 860 is configured to obtain a face image to be recognized, and input the face image to be recognized into a target face recognition model, so as to obtain a face recognition result.
It is to be understood that the specific embodiment of the face recognition device 800 is substantially the same as the specific embodiment of the face recognition method described above, and will not be described herein.
In addition, referring to fig. 9, the present application further provides a model training apparatus 900, including:
a second obtaining unit 910, configured to obtain a pre-training face recognition model and a plurality of sample sub-training sets, where the pre-training face recognition model includes a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, the pre-training sub-models are obtained by training corresponding sample sub-training sets, and any sample sub-training set includes a plurality of sample face images;
A second receiving unit 920, configured to receive a plurality of candidate deletion requests, and arrange the received plurality of candidate deletion requests according to a receiving time sequence to form a request set, where the candidate deletion requests are in one-to-one correspondence with the sample face images;
a second judging unit 930, configured to determine, according to the request set, a target sample sub-training set in each sample sub-training set when the first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, and take, as a target deletion request, a candidate deletion request corresponding to the target sample sub-training set;
a second deleting unit 940, configured to delete the target deletion request in the request set, and delete the corresponding sample face image in the target sample sub-training set according to the target deletion request, to obtain a forgetting sub-training set;
the second retraining unit 950 is configured to retrain the corresponding pre-trained sub-model according to the forgetting sub-training set, and perform adjustment processing on the pre-trained face recognition model according to the retraining result, so as to obtain the target face recognition model.
It is to be understood that the specific embodiment of the model training apparatus 900 is substantially the same as the specific embodiment of the model training method described above, and will not be described herein.
In addition, referring to fig. 10, fig. 10 illustrates a hardware structure of an electronic device of another embodiment, the electronic device including:
the processor 1001 may be implemented by a general-purpose CPU (Central Processing Unit ), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical scheme provided by the embodiments of the present application;
the memory 1002 may be implemented in the form of a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 1002 may store an operating system and other application programs; when the technical solution provided in the embodiments of this specification is implemented by software or firmware, the relevant program code is stored in the memory 1002 and invoked by the processor 1001 to perform the face recognition method of the embodiments of the present application, for example, the above-described method steps S110 to S160 in fig. 1, method steps S210 to S240 in fig. 2, method steps S310 to S320 in fig. 3, method steps S410 to S440 in fig. 4, method steps S510 to S530 in fig. 5, and method steps S610 to S640 in fig. 6, or to perform the model training method of the embodiments of the present application, for example, the above-described method steps S710 to S750 in fig. 7;
An input/output interface 1003 for implementing information input and output;
the communication interface 1004 is configured to implement communication interaction between the present device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
a bus 1005 for transferring information between the various components of the device (e.g., the processor 1001, memory 1002, input/output interface 1003, and communication interface 1004);
wherein the processor 1001, the memory 1002, the input/output interface 1003, and the communication interface 1004 realize communication connection between each other inside the device through the bus 1005.
The embodiment of the present application also provides a storage medium, which is a computer-readable storage medium, for computer-readable storage, where the storage medium stores one or more programs, and the one or more programs may be executed by one or more processors to implement the above-described face recognition method, for example, perform the above-described method steps S110 to S160 in fig. 1, the above-described method steps S210 to S240 in fig. 2, the above-described method steps S310 to S320 in fig. 3, the above-described method steps S410 to S440 in fig. 4, the above-described method steps S510 to S530 in fig. 5, the above-described method steps S610 to S640 in fig. 6, or implement the above-described model training method, for example, perform the above-described method steps S710 to S750 in fig. 7.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the face recognition method, device, equipment, and storage medium provided by the embodiments of the present application, a pre-training face recognition model and a plurality of sample sub-training sets are obtained, where the pre-training face recognition model comprises a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, each pre-training sub-model is obtained by training on its corresponding sample sub-training set, and each sample sub-training set comprises a plurality of sample face images. A plurality of candidate deletion requests are received and arranged in order of receiving time to form a request set, where the candidate deletion requests correspond one-to-one with the sample face images. When the first number of candidate deletion requests in the request set is greater than or equal to a preset number threshold, a target sample sub-training set is determined among the sample sub-training sets according to the request set, and the candidate deletion requests corresponding to the target sample sub-training set are taken as target deletion requests. The target deletion requests are removed from the request set, and the corresponding sample face images are deleted from the target sample sub-training set according to the target deletion requests, yielding a forgetting sub-training set. The corresponding pre-training sub-model is retrained on the forgetting sub-training set, and the pre-training face recognition model is adjusted according to the retraining result to obtain a target face recognition model. Finally, a face image to be recognized is acquired and input into the target face recognition model to obtain a face recognition result.
Based on this method, the pre-training face recognition model is divided into a plurality of pre-training sub-models, and the sample sub-training set corresponding to each pre-training sub-model is obtained. Candidate deletion requests are received, and only after the number of received requests reaches the preset threshold is a target sample sub-training set determined among the sample sub-training sets according to the request set and the target deletion requests identified; the sample face images corresponding to the target deletion requests are then deleted from the target sample sub-training set to obtain a forgetting sub-training set, which is used to retrain the corresponding pre-training sub-model. Because the model forgetting operation is carried out only when the number threshold is reached, the computational cost of the model forgetting process is effectively reduced. In addition, each forgetting operation retrains only the pre-training sub-model corresponding to the forgetting sub-training set, and the remaining sub-models need not be processed, which further reduces the computational cost of the forgetting process, effectively improves training efficiency, improves the service processing efficiency of a financial service system, and safeguards the user experience.
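The batched forgetting strategy summarized above can be sketched in Python as follows. This is an illustrative sketch only: the class name, data layout (shards as dictionaries mapping sample IDs to images), and the placeholder retraining routine are assumptions, not part of the disclosed embodiment.

```python
from collections import Counter

class ShardedUnlearner:
    """Illustrative sketch: buffer deletion requests in arrival order and,
    once the preset threshold is reached, retrain only the sub-model whose
    sample sub-training set (shard) received the most requests."""

    def __init__(self, shards, submodels, threshold):
        self.shards = shards          # shard_id -> {sample_id: image}
        self.submodels = submodels    # shard_id -> sub-model
        self.threshold = threshold    # preset number threshold
        self.requests = []            # (sample_id, shard_id), arrival order

    def receive(self, sample_id, shard_id):
        self.requests.append((sample_id, shard_id))
        if len(self.requests) >= self.threshold:
            self._forget()

    def _forget(self):
        # Target shard: the one with the most pending deletion requests.
        counts = Counter(shard for _, shard in self.requests)
        target = counts.most_common(1)[0][0]
        # Remove the target requests from the request set and the
        # corresponding sample face images from the target shard.
        kept = []
        for sample_id, shard in self.requests:
            if shard == target:
                self.shards[target].pop(sample_id, None)
            else:
                kept.append((sample_id, shard))
        self.requests = kept
        # Retrain only the affected sub-model on the forgetting set.
        self.submodels[target] = self._retrain(target)

    def _retrain(self, shard_id):
        # Placeholder for an actual training routine over the pruned shard.
        return f"model_trained_on_{sorted(self.shards[shard_id])}"
```

Because only one sub-model is retrained per forgetting operation, the cost of honoring deletion requests stays proportional to a single shard rather than the whole training set.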
The embodiments described herein serve to describe the technical solutions of the embodiments of the present application more clearly and do not constitute a limitation on those solutions. Those skilled in the art will appreciate that, as technology evolves and new application scenarios emerge, the technical solutions provided by the embodiments of the present application remain equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in figs. 1-7 do not limit the embodiments of the application, which may include more or fewer steps than shown, combine certain steps, or use different steps.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" and similar expressions mean any combination of the listed items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a logical functional division, and in actual implementation there may be other division manners; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing a program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, but the scope of the claims of the embodiments of the present application is not thereby limited. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A face recognition method, comprising:
the method comprises the steps of obtaining a pre-training face recognition model and a plurality of sample sub-training sets, wherein the pre-training face recognition model comprises a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, the pre-training sub-models are obtained by training the corresponding sample sub-training sets, and any sample sub-training set comprises a plurality of sample face images;
receiving a plurality of candidate deletion requests, and arranging the received candidate deletion requests according to a receiving time sequence to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with the sample face images;
when the first number of the candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set in each sample sub-training set according to the request set, and taking the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests;
deleting the target deleting request in the request set, deleting the corresponding sample face image in the target sample sub-training set according to the target deleting request, and obtaining a forgetting sub-training set;
retraining the corresponding pre-training sub-model according to the forgetting sub-training set, and adjusting the pre-training face recognition model according to the retraining result to obtain a target face recognition model;
and acquiring a face image to be recognized, and inputting the face image to be recognized into the target face recognition model to obtain a face recognition result.
2. The method according to claim 1, wherein retraining the corresponding pre-trained sub-model according to the forgetting sub-training set and adjusting the pre-trained face recognition model according to a retraining result to obtain a target face recognition model includes:
detecting whether the candidate deletion request is received or not until a stop instruction is received;
if the candidate deletion request is received, inserting the candidate deletion request into the request set in order of receiving time, and re-executing the steps of, when the first number of the candidate deletion requests in the request set is greater than or equal to the preset number threshold, determining a target sample sub-training set in each sample sub-training set according to the request set and taking the candidate deletion request corresponding to the target sample sub-training set as a target deletion request, so as to delete the corresponding sample face images in the target sample sub-training set and obtain a plurality of forgetting sub-training sets;
sequentially retraining the corresponding pre-training sub-models according to each forgetting sub-training set to obtain forgetting sub-models corresponding to each pre-training sub-model;
and according to all the forgetting sub-models, adjusting the pre-training face recognition model to obtain a target face recognition model.
3. The method of claim 1, wherein the obtaining a pre-trained face recognition model and a plurality of sample sub-training sets comprises:
acquiring a pre-training face recognition model and a sample training set;
and splitting the sample training set according to a preset number of sub-models to obtain a plurality of sample sub-training sets, wherein the number of sample sub-training sets equals the number of sub-models, and any two sample sub-training sets have no intersection.
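The disjoint split described in this claim can be sketched as follows; the function name and the round-robin assignment rule are illustrative assumptions, since the claim only requires that the sub-training sets be pairwise disjoint and equal in number to the sub-models.

```python
def split_into_shards(samples, num_submodels):
    """Partition a sample training set into `num_submodels` pairwise-disjoint
    sub-training sets (shards)."""
    shards = [[] for _ in range(num_submodels)]
    for i, sample in enumerate(samples):
        # Each sample index maps to exactly one shard, so no two shards
        # share a sample and their union is the whole training set.
        shards[i % num_submodels].append(sample)
    return shards
```

Because every sample lands in exactly one shard, deleting a sample later affects exactly one sub-model, which is what makes per-shard retraining sufficient.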
4. A method according to claim 3, wherein the pre-trained face recognition model is trained by:
acquiring an initial face recognition model and sample identity labels of the sample face images;
splitting the initial face recognition model according to the number of the sub-models to obtain a plurality of initial sub-models, wherein the initial sub-models are in one-to-one correspondence with the sample sub-training sets;
training the initial sub-model according to the corresponding sample sub-training set and the sample identity label, for any initial sub-model, to obtain the pre-training sub-model;
and obtaining a pre-training face recognition model according to all the pre-training sub-models.
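The pre-training procedure of this claim, training each initial sub-model on its own sample sub-training set and labels, might look like the sketch below. The function names and the `train_fn` callback are assumptions; the claim does not prescribe a particular training routine.

```python
def pretrain(initial_submodels, sample_shards, labels, train_fn):
    """Train each initial sub-model on its corresponding sample
    sub-training set, yielding the pre-training sub-models whose
    ensemble forms the pre-training face recognition model.
    `train_fn(model, shard, shard_labels)` is an assumed training routine."""
    assert len(initial_submodels) == len(sample_shards)
    pretrained = []
    for model, shard in zip(initial_submodels, sample_shards):
        # Restrict the identity labels to the samples in this shard.
        shard_labels = {s: labels[s] for s in shard}
        pretrained.append(train_fn(model, shard, shard_labels))
    return pretrained
```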
5. The method according to claim 1, wherein when the first number of the candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining, according to the request set, a target sample sub-training set in each of the sample sub-training sets, and taking the candidate deletion request corresponding to the target sample sub-training set as a target deletion request, includes:
when the first number of the candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining the second number of the candidate deletion requests corresponding to each sample sub-training set according to the request set;
taking the sample sub-training set corresponding to the second number with the largest value as a target sample sub-training set;
and taking the candidate deletion request corresponding to the target sample sub-training set as a target deletion request.
6. The method of claim 1, wherein the target face recognition model comprises a plurality of target sub-models, wherein the target sub-models correspond one-to-one with the pre-training sub-models; the step of obtaining the face image to be recognized, and inputting the face image to be recognized into the target face recognition model to obtain a face recognition result, comprises the following steps:
acquiring a face image to be identified;
inputting the face image to be recognized into each target sub-model;
extracting features of the face image to be identified through the target sub-model, and outputting feature vectors;
and carrying out fusion processing on the feature vectors output by each target submodel, and carrying out classification processing on fusion results to obtain face recognition results.
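The recognition step of this claim, where each target sub-model outputs a feature vector, the vectors are fused, and the fused result is classified, can be sketched as below. The mean fusion and nearest-neighbour cosine-similarity classifier are illustrative choices, since the claim does not fix a particular fusion or classification method; `gallery` is an assumed mapping from identities to reference vectors.

```python
import math

def recognize(features_per_submodel, gallery):
    """Fuse per-sub-model feature vectors and classify the fused result.
    `features_per_submodel`: one feature vector per target sub-model.
    `gallery`: identity -> reference feature vector (non-zero)."""
    # Fusion: element-wise mean of the sub-model feature vectors.
    dim = len(features_per_submodel[0])
    n = len(features_per_submodel)
    fused = [sum(v[i] for v in features_per_submodel) / n for i in range(dim)]

    # Classification: nearest gallery identity by cosine similarity.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    return max(gallery, key=lambda ident: cosine(fused, gallery[ident]))
```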
7. A method of model training, comprising:
the method comprises the steps of obtaining a pre-training face recognition model and a plurality of sample sub-training sets, wherein the pre-training face recognition model comprises a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, the pre-training sub-models are obtained by training the corresponding sample sub-training sets, and any sample sub-training set comprises a plurality of sample face images;
receiving a plurality of candidate deletion requests, and arranging the received candidate deletion requests according to a receiving time sequence to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with the sample face images;
when the first number of the candidate deletion requests in the request set is greater than or equal to a preset number threshold, determining a target sample sub-training set in each sample sub-training set according to the request set, and taking the candidate deletion requests corresponding to the target sample sub-training set as target deletion requests;
deleting the target deleting request in the request set, deleting the corresponding sample face image in the target sample sub-training set according to the target deleting request, and obtaining a forgetting sub-training set;
and retraining the corresponding pre-trained sub-model according to the forgetting sub-training set, and adjusting the pre-trained face recognition model according to the retraining result to obtain a target face recognition model.
8. A face recognition device, comprising:
a first acquisition unit, configured to acquire a pre-training face recognition model and a plurality of sample sub-training sets, wherein the pre-training face recognition model comprises a plurality of pre-training sub-models, the sample sub-training sets correspond to the pre-training sub-models, the pre-training sub-models are obtained by training on the corresponding sample sub-training sets, and any one of the sample sub-training sets comprises a plurality of sample face images;
a first receiving unit, configured to receive a plurality of candidate deletion requests and arrange the received candidate deletion requests according to the receiving time sequence to form a request set, wherein the candidate deletion requests are in one-to-one correspondence with the sample face images;
a first judging unit, configured to determine, according to the request set, a target sample sub-training set in each sample sub-training set when a first number of the candidate deletion requests in the request set is greater than or equal to a preset number threshold, and take the candidate deletion request corresponding to the target sample sub-training set as a target deletion request;
the first deleting unit is used for deleting the target deleting request in the request set, deleting the corresponding sample face image in the target sample sub-training set according to the target deleting request, and obtaining a forgetting sub-training set;
the first retraining unit is used for retraining the corresponding pre-training sub-model according to the forgetting sub-training set, and adjusting the pre-training face recognition model according to a retraining result to obtain a target face recognition model;
the first recognition unit is used for acquiring a face image to be recognized, inputting the face image to be recognized into the target face recognition model, and obtaining a face recognition result.
9. An electronic device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the face recognition method of any one of claims 1 to 6 or the model training method of claim 7.
10. A storage medium storing a computer program which, when executed by a processor, implements the face recognition method according to any one of claims 1 to 6, or the model training method according to claim 7.
CN202310720333.0A 2023-06-16 2023-06-16 Face recognition method, device, equipment and storage medium Pending CN116704581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310720333.0A CN116704581A (en) 2023-06-16 2023-06-16 Face recognition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310720333.0A CN116704581A (en) 2023-06-16 2023-06-16 Face recognition method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116704581A true CN116704581A (en) 2023-09-05

Family

ID=87835483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310720333.0A Pending CN116704581A (en) 2023-06-16 2023-06-16 Face recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116704581A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117711078A (en) * 2023-12-13 2024-03-15 西安电子科技大学广州研究院 Model forgetting method for face recognition system

Similar Documents

Publication Publication Date Title
CN111898696B (en) Pseudo tag and tag prediction model generation method, device, medium and equipment
WO2021203863A1 (en) Artificial intelligence-based object detection method and apparatus, device, and storage medium
CN112685565A (en) Text classification method based on multi-mode information fusion and related equipment thereof
CN108268629B (en) Image description method and device based on keywords, equipment and medium
CN112330331A (en) Identity verification method, device and equipment based on face recognition and storage medium
CN114241459B (en) Driver identity verification method and device, computer equipment and storage medium
CN115640394A (en) Text classification method, text classification device, computer equipment and storage medium
CN114912537B (en) Model training method and device, behavior prediction method and device, equipment and medium
CN112733645A (en) Handwritten signature verification method and device, computer equipment and storage medium
CN116704581A (en) Face recognition method, device, equipment and storage medium
CN113704623A (en) Data recommendation method, device, equipment and storage medium
CN113128526B (en) Image recognition method and device, electronic equipment and computer-readable storage medium
CN112926341A (en) Text data processing method and device
CN116543798A (en) Emotion recognition method and device based on multiple classifiers, electronic equipment and medium
CN116383478A (en) Transaction recommendation method, device, equipment and storage medium
CN110929767A (en) Font processing method, system, device and medium
CN114581706B (en) Method and device for configuring certificate recognition model, electronic equipment and storage medium
CN115049073A (en) Model training method and device, scoring method and device, equipment and medium
CN117575894B (en) Image generation method, device, electronic equipment and computer readable storage medium
CN112949317B (en) Text semantic recognition method and device, computer equipment and storage medium
CN114647733B (en) Question and answer corpus evaluation method and device, computer equipment and storage medium
CN114417875B (en) Data processing method, apparatus, device, readable storage medium, and program product
CN113792342B (en) Desensitization data reduction method, device, computer equipment and storage medium
CN114385814A (en) Information retrieval method and device, computer equipment and storage medium
CN116775187A (en) Data display method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination