CN116597495A - Model updating method and device, face recognition method and device and storage medium - Google Patents

Info

Publication number
CN116597495A
CN116597495A (application CN202310638481.8A)
Authority
CN
China
Prior art keywords
model
face recognition
face image
image data
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310638481.8A
Other languages
Chinese (zh)
Inventor
洪振厚
王健宗
瞿晓阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310638481.8A priority Critical patent/CN116597495A/en
Publication of CN116597495A publication Critical patent/CN116597495A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides a model updating method and apparatus, a face recognition method and apparatus, and a storage medium, and belongs to the technical field of financial technology. The method comprises the following steps: dividing sample face image data to obtain training face image data and test face image data; acquiring an original face recognition model, wherein the original face recognition model comprises a recognition network; acquiring first position distribution data and a first unit number for the original hidden layer units of the recognition network; determining a second unit number and second position distribution data for newly added hidden layer units based on the first position distribution data and the first unit number; training the original face recognition model based on the training face image data, the second unit number, and the second position distribution data to obtain an initial face recognition model; and performing model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model. The application can improve the accuracy of the model in face recognition.

Description

Model updating method and device, face recognition method and device and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a model updating method and apparatus, a face recognition method and apparatus, and a storage medium.
Background
With the gradual maturation of pattern recognition technology, biometric identification based on biological characteristics is beginning to be applied and promoted in the field of identity recognition, and many payment platforms have already launched quick payment methods such as face-scanning payment based on face recognition.
When existing face recognition methods perform model-based recognition of captured face images on edge devices, it is often difficult to comprehensively extract facial visual information, and the identity of the person in the face image cannot be recognized directly and accurately, so face recognition accuracy is low.
Disclosure of Invention
The embodiments of the application mainly aim to provide a model updating method and apparatus, a face recognition method and apparatus, an electronic device, and a storage medium, with the aim of improving the accuracy of the model in face recognition.
To achieve the above object, a first aspect of an embodiment of the present application provides a method for updating a model, including:
acquiring sample face image data;
carrying out data division on the sample face image data to obtain training face image data and test face image data;
acquiring an original face recognition model, wherein the original face recognition model comprises a feature extraction network and a recognition network;
acquiring first position distribution data and a first unit number for the original hidden layer units of the recognition network;
determining a second unit number and second position distribution data of the newly added hidden layer unit based on the first position distribution data and the first unit number;
model training is carried out on the original face recognition model based on the training face image data, the second unit number of the newly added hidden layer units and the second position distribution data, and an initial face recognition model is obtained;
and performing model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, wherein the target face recognition model is used for recognizing a target face image of a target object to obtain person identity information of the target object.
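As a concrete illustration of the unit-planning step (determining the second unit number and second position distribution data from the first), the following is a minimal Python sketch. The growth ratio of 0.5, the interleaved placement rule, and the function name `plan_new_units` are all illustrative assumptions; the patent text does not specify how the second unit number is derived.

```python
def plan_new_units(first_positions, growth_ratio=0.5):
    """Given the positions of the original hidden units, return
    (second_unit_count, second_positions) for the units to insert.
    growth_ratio and the midway placement are assumptions."""
    first_count = len(first_positions)
    second_count = max(1, int(first_count * growth_ratio))
    # Place each new unit just after an evenly spaced original unit.
    step = first_count / second_count
    second_positions = [first_positions[int(i * step)] + 0.5
                        for i in range(second_count)]
    return second_count, second_positions

count, positions = plan_new_units(list(range(8)))
print(count, positions)  # 4 new units, interleaved among the original 8
```

A layer with 8 original units would, under these assumed parameters, gain 4 new units spread evenly across the layer.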
In some embodiments, the data dividing the sample face image data to obtain training face image data and test face image data includes:
performing image brightness adjustment on the sample face image data to obtain intermediate face image data;
performing pixel normalization on the intermediate face image data to obtain initial face image data;
and carrying out data division on the initial face image data according to preset proportion parameters to obtain the training face image data and the test face image data.
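The proportional split described above can be sketched as follows; the 80/20 ratio, the shuffling, and the fixed seed are illustrative assumptions, since the text only mentions "preset proportion parameters".

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle the sample face image data and split it into training
    and test sets according to a preset proportion parameter."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(test_set))  # 80 20
```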
In some embodiments, the training the original face recognition model based on the training face image data and the second unit number and the second position distribution data of the newly added hidden layer unit to obtain an initial face recognition model includes:
performing model enhancement on the original face recognition model according to the second unit number and the second position distribution data to obtain an intermediate face recognition model;
model training is carried out on the intermediate face recognition model based on the training face image data, and a model loss value is obtained;
and carrying out parameter updating on the original face recognition model according to the model loss value to obtain the initial face recognition model.
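The "model enhancement" step, inserting newly added hidden layer units without changing the model's overall structure, can be illustrated on a toy two-layer network. Zero-initialising the weights of the new units (so the widened model initially computes the same function) is one common widening scheme and an assumption here, not something the patent prescribes.

```python
def widen_hidden_layer(w_in, w_out, n_new):
    """Insert n_new hidden units into a two-layer network.
    w_in: one row per hidden unit (incoming weights);
    w_out: one column per hidden unit (outgoing weights).
    Zero weights keep the widened model's output unchanged
    until training updates the new units."""
    n_inputs = len(w_in[0])
    w_in_new = w_in + [[0.0] * n_inputs for _ in range(n_new)]
    w_out_new = [row + [0.0] * n_new for row in w_out]
    return w_in_new, w_out_new

w_in = [[1.0, 2.0], [3.0, 4.0]]   # 2 hidden units, 2 inputs
w_out = [[0.5, -0.5]]             # 1 output unit
w_in2, w_out2 = widen_hidden_layer(w_in, w_out, n_new=2)
print(len(w_in2), len(w_out2[0]))  # 4 4
```

Because the new units start with zero outgoing weights, the enhanced model behaves identically before training, and the subsequent loss-driven parameter update can then exploit the added capacity.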
In some embodiments, the model optimizing the initial face recognition model based on the test face image data to obtain a target face recognition model includes:
performing model test on the initial face recognition model based on the test face image data to obtain a test result;
And carrying out model optimization on the initial face recognition model according to the test result to obtain the target face recognition model.
To achieve the above object, a second aspect of an embodiment of the present application provides a face recognition method, including:
acquiring a target face image of a target object;
and inputting the target face image into a target face recognition model for face recognition to obtain person identity information of the target object, wherein the target face recognition model is obtained according to the model updating method of the first aspect.
In some embodiments, the target face recognition model includes a feature extraction network and a recognition network, and inputting the target face image into the target face recognition model to perform face recognition to obtain person identity information of the target object includes:
performing feature extraction on the target face image based on the feature extraction network to obtain three-dimensional facial visual features;
and performing identity recognition on the three-dimensional facial visual features based on the recognition network to obtain the person identity information of the target object.
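One plausible reading of identity recognition over extracted face features is a nearest-neighbour match between the query feature and enrolled gallery features. The cosine-similarity matcher below is an illustrative assumption, not the patent's specified mechanism; the gallery names and vectors are made up for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(face_feature, gallery):
    """gallery: {identity: enrolled feature vector}. Return the
    identity whose stored feature is most similar to the query."""
    return max(gallery, key=lambda name: cosine(face_feature, gallery[name]))

gallery = {"alice": [1.0, 0.1, 0.0], "bob": [0.0, 1.0, 0.2]}
print(identify([0.9, 0.2, 0.0], gallery))  # alice
```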
To achieve the above object, a third aspect of the embodiments of the present application provides a model updating apparatus, including:
The data acquisition module is used for acquiring sampled face image data;
the data dividing module is used for carrying out data division on the sample face image data to obtain training face image data and test face image data;
the model acquisition module is used for acquiring an original face recognition model, wherein the original face recognition model comprises a feature extraction network and a recognition network;
a network unit data acquisition module, configured to acquire first position distribution data and a first unit number for the original hidden layer units of the recognition network;
the newly added data determining module is used for determining a second unit number and second position distribution data of the newly added hidden layer unit based on the first position distribution data and the first unit number;
the model training module is used for carrying out model training on the original face recognition model based on the training face image data, the second unit number of the newly added hidden layer units and the second position distribution data to obtain an initial face recognition model;
and the model optimization module is used for performing model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, wherein the target face recognition model is used for recognizing a target face image of a target object to obtain person identity information of the target object.
To achieve the above object, a fourth aspect of the embodiments of the present application provides a face recognition device, including:
the image acquisition module is used for acquiring a target face image of a target object;
and the face recognition module is used for inputting the target face image into a target face recognition model for face recognition to obtain person identity information of the target object, wherein the target face recognition model is obtained according to the model updating apparatus.
To achieve the above object, a fifth aspect of the embodiments of the present application proposes an electronic device, the electronic device including a memory, a processor, the memory storing a computer program, the processor implementing the method according to the first aspect or the method according to the second aspect when executing the computer program.
To achieve the above object, a sixth aspect of the embodiments of the present application proposes a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to the first aspect or the method according to the second aspect.
The application provides a model updating method, a face recognition method, a model updating apparatus, a face recognition apparatus, an electronic device, and a storage medium. Sample face image data is acquired; the sample face image data is divided to obtain training face image data and test face image data; an original face recognition model comprising a feature extraction network and a recognition network is acquired; first position distribution data and a first unit number are acquired for the original hidden layer units of the recognition network; a second unit number and second position distribution data are determined for the newly added hidden layer units based on the first position distribution data and the first unit number; the original face recognition model is trained based on the training face image data and the second unit number and second position distribution data of the newly added hidden layer units to obtain an initial face recognition model; and the initial face recognition model is optimized based on the test face image data to obtain a target face recognition model, wherein the target face recognition model is used for recognizing a target face image of a target object to obtain person identity information of the target object. The method can enhance the original face recognition model based on the newly added hidden layer units, enriching the network structure of the original face recognition model; without changing the overall structure of the model, the model's face recognition performance can be trained simply by introducing the newly added hidden layer units during training, which better improves the face recognition accuracy of the model.
Meanwhile, optimizing the initial face recognition model with the test face image data further improves model performance, so that the target face recognition model has better face recognition accuracy; in turn, payment transactions can be completed by face-scanning payment in different financial transaction settings, improving the applicability of face recognition in financial transaction scenarios and the transaction efficiency of financial transactions based on face recognition.
Drawings
FIG. 1 is a flow chart of a model update method provided by an embodiment of the present application;
fig. 2 is a flowchart of step S102 in fig. 1;
fig. 3 is a flowchart of step S106 in fig. 1;
fig. 4 is a flowchart of step S107 in fig. 1;
fig. 5 is a flowchart of a face recognition method according to an embodiment of the present application;
fig. 6 is a flowchart of step S502 in fig. 5;
FIG. 7 is a schematic structural diagram of a model updating device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a face recognition device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
First, several nouns involved in the present application are parsed:
Artificial intelligence (AI): a new technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the nature of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is also a theory, method, technique, and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Natural language processing (NLP): a branch of artificial intelligence at the intersection of computer science and linguistics, often referred to as computational linguistics, that processes, understands, and applies human languages (e.g., Chinese, English). Natural language processing includes parsing, semantic analysis, discourse understanding, and the like. It is commonly used in machine translation, recognition of handwritten and printed characters, speech recognition and text-to-speech conversion, intent recognition, information extraction and filtering, text classification and clustering, public opinion analysis and opinion mining, and related fields, and involves data mining, machine learning, knowledge acquisition, knowledge engineering, artificial intelligence research, and linguistic research related to language computation.
Pooling (Pooling): essentially sampling; it performs dimensionality reduction and compression on an input feature map in a certain way to increase computation speed. The most commonly used pooling operation is max pooling (Max Pooling).
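A minimal illustration of max pooling as defined above, on a 4×4 feature map with a 2×2 window and stride 2:

```python
def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2 on a 2-D list feature map:
    each output value is the maximum of one 2x2 block."""
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

fm = [[1, 3, 2, 1],
      [4, 6, 5, 0],
      [7, 2, 9, 8],
      [1, 1, 3, 4]]
print(max_pool_2x2(fm))  # [[6, 5], [7, 9]]
```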
Activation function (Activation Function): is a function running on neurons of an artificial neural network, responsible for mapping the inputs of the neurons to the outputs.
Encoding (Encoder): converts an input sequence into a vector of fixed length.
Decoding (Decoder): converts the previously generated fixed-length vector into an output sequence; the input sequence can be text, speech, images, or video, and the output sequence can be text or images.
With the gradual maturation of pattern recognition technology, biometric identification based on biological characteristics is beginning to be applied and promoted in the field of identity recognition, and many payment platforms have already launched quick payment methods such as face-scanning payment based on face recognition.
For example, when shopping online, after an object selects the goods to be purchased and adds them to the cart, payment can be made by face-scanning payment at the payment step: the camera of the object's terminal, such as a mobile phone, captures a photo of the object's face and the payment is completed. Compared with password payment and other methods, this better improves payment efficiency.
When existing face recognition methods perform model-based recognition of captured face images on edge devices, it is often difficult to comprehensively extract facial visual information, and the identity of the person in the face image cannot be recognized directly and accurately, so face recognition accuracy is low.
Based on the above, the embodiments of the application provide a model updating method, a face recognition method, a model updating apparatus, a face recognition apparatus, an electronic device, and a storage medium, aiming to improve the accuracy of the model in face recognition.
The model updating method, the face recognition method, the model updating device, the electronic equipment and the storage medium provided by the embodiment of the application are specifically described through the following embodiments, and the model updating method in the embodiment of the application is described first.
The embodiments of the application can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiments of the application provide a model updating method and a face recognition method, relating to the technical field of artificial intelligence. The model updating method and the face recognition method provided by the embodiments of the application can be applied to a terminal or to a server side, and can also be software running in the terminal or the server side. In some embodiments, the terminal may be a smart phone, a tablet, a notebook computer, a desktop computer, etc.; the server side can be configured as an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms; the software may be an application implementing the model updating method and the face recognition method, but is not limited to the above forms.
The application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Fig. 1 is an optional flowchart of a method for updating a model according to an embodiment of the present application, where the method in fig. 1 may include, but is not limited to, steps S101 to S107.
Step S101, acquiring sample face image data;
step S102, carrying out data division on sample face image data to obtain training face image data and test face image data;
step S103, an original face recognition model is obtained, wherein the original face recognition model comprises a feature extraction network and a recognition network;
step S104, acquiring first position distribution data and a first unit number for the original hidden layer units of the recognition network;
step S105, determining a second unit number and second position distribution data of the newly added hidden layer unit based on the first position distribution data and the first unit number;
step S106, model training is carried out on the original face recognition model based on the training face image data, the second unit number of the newly added hidden layer units and the second position distribution data, and an initial face recognition model is obtained;
step S107, performing model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, wherein the target face recognition model is used for recognizing a target face image of a target object to obtain person identity information of the target object.
In steps S101 to S107 of the embodiments of the application, sample face image data is acquired; the sample face image data is divided to obtain training face image data and test face image data; an original face recognition model comprising a feature extraction network and a recognition network is acquired; first position distribution data and a first unit number are acquired for the original hidden layer units of the recognition network; a second unit number and second position distribution data are determined for the newly added hidden layer units based on the first position distribution data and the first unit number; the original face recognition model is trained based on the training face image data and the second unit number and second position distribution data of the newly added hidden layer units to obtain an initial face recognition model; and the initial face recognition model is optimized based on the test face image data to obtain a target face recognition model, wherein the target face recognition model is used for recognizing a target face image of a target object to obtain person identity information of the target object. The method can enhance the original face recognition model based on the newly added hidden layer units, enriching the network structure of the original face recognition model; without changing the overall structure of the model, the model's face recognition performance can be trained simply by introducing the newly added hidden layer units during training, which better improves the face recognition accuracy of the model.
Meanwhile, optimizing the initial face recognition model with the test face image data further improves model performance, so that the target face recognition model has better face recognition accuracy.
In step S101 of some embodiments, sample face image data may be obtained by writing a web crawler and crawling data in a targeted manner after setting a data source, where the data source may be a person image database or the like; sample face image data of different persons may also be obtained by shooting with tools such as video cameras and still cameras, or by other means, without limitation. The sample face image data includes different sample face images and sample person identity tags of the sample face images, where a sample person identity tag is used to indicate the person identity corresponding to a sample face image, such as the age and gender of the sample person.
Referring to fig. 2, in some embodiments, step S102 may include, but is not limited to, steps S201 to S203:
step S201, adjusting the image brightness of the sample face image data to obtain middle face image data;
step S202, performing pixel normalization on intermediate face image data to obtain initial face image data;
step S203, data division is carried out on the initial face image data according to preset proportion parameters, and training face image data and test face image data are obtained.
In step S201 of some embodiments, when performing image brightness adjustment on sample face image data, brightness may be adjusted by setting a gamma value, where the intermediate face image data includes an intermediate face image and a sample person identity tag corresponding to the intermediate face image, and the brightness adjustment process may be represented as shown in formula (1):
I′ = I^g  formula (1)
Here, I′ is the pixel value of the intermediate face image in the intermediate face image data, I is the pixel value of the sample face image in the sample face image data, and g is the gamma value. When g is greater than 1, the intermediate face image is darker than the sample face image; when g is less than 1, it is brighter; when g equals 1, the brightness of the two images is the same. According to actual needs, g is generally taken in the range of 0.5 to 2.
It should be noted that, when the image brightness of the sample face image data is adjusted, the contrast of the sample face image may also be adjusted to obtain intermediate face image data, where the intermediate face image data includes an intermediate face image and a sample person identity tag corresponding to the intermediate face image, and the process of adjusting the contrast may be represented as shown in formula (2):
I′ = log(I)  formula (2)
Wherein I' is a pixel value of the intermediate face image in the intermediate face image data, and I is a pixel value of the sample face image in the sample face image data.
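Formulas (1) and (2) can be sketched as follows. This is a minimal NumPy illustration on a toy image; the function names and sample values are assumptions, pixel values are assumed scaled to [0, 1], and `log1p` is used in place of a bare logarithm purely as a numerical-safety choice (so zero-valued pixels stay finite), which the original text does not specify:

```python
import numpy as np

# Formula (1): gamma brightness adjustment, I' = I^g.
def adjust_gamma(image, g):
    return np.power(image, g)

# Formula (2): logarithmic contrast adjustment, I' = log(I).
# log1p keeps zero-valued pixels finite (an added assumption).
def adjust_contrast_log(image):
    return np.log1p(image)

sample = np.full((2, 2), 0.5)          # toy mid-gray "image"
darker = adjust_gamma(sample, 2.0)     # g > 1 darkens: 0.5^2 = 0.25
brighter = adjust_gamma(sample, 0.5)   # g < 1 brightens: 0.5^0.5
stretched = adjust_contrast_log(sample)
```

As formula (1) predicts, a gamma above 1 darkens every pixel of the toy image and a gamma below 1 brightens it.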
In step S202 of some embodiments, pixel normalization processing may be performed on the intermediate face image by using a maximum-minimum normalization method, so as to obtain initial face image data, where the initial face image data includes an initial face image and a sample person identity tag corresponding to the initial face image. Wherein, the normalization formula is shown as formula (3):
x_i′ = (x_i − min(x)) / (max(x) − min(x))  formula (3)

where x_i is the pixel value of the intermediate face image, x_i′ is the normalized pixel value, max(x) is the maximum pixel value of the intermediate face image, and min(x) is the minimum pixel value of the intermediate face image.
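The maximum-minimum normalization of step S202 can be sketched as follows (a minimal NumPy illustration; the function name and the toy pixel values are assumptions):

```python
import numpy as np

def min_max_normalize(image):
    # Max-min normalization: x' = (x - min(x)) / (max(x) - min(x)),
    # mapping every pixel value of the intermediate face image into [0, 1].
    x_min, x_max = image.min(), image.max()
    return (image - x_min) / (x_max - x_min)

pixels = np.array([[0.0, 50.0], [100.0, 200.0]])  # toy intermediate image
normalized = min_max_normalize(pixels)
# The smallest pixel maps to 0, the largest to 1, the rest in between.
```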
In step S203 of some embodiments, the preset scale parameter may be set according to the actual situation, for example, the scale parameter may be 7:3, that is, the initial face image data is divided into two parts according to the scale parameter, where one part is used as training face image data, and the other part is used as test face image data.
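The 7:3 division of step S203 can be sketched as follows. This is a minimal NumPy illustration; the function name, the fixed seed, and the stand-in data are assumptions, and shuffling before the split is an added convenience not stated in the text:

```python
import numpy as np

def split_dataset(samples, labels, train_ratio=0.7, seed=42):
    # Shuffle, then divide by the preset scale parameter (7:3 here):
    # the first portion becomes training data, the remainder test data.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(samples))
    cut = int(len(samples) * train_ratio)
    train_idx, test_idx = order[:cut], order[cut:]
    return (samples[train_idx], labels[train_idx],
            samples[test_idx], labels[test_idx])

images = np.arange(10).reshape(10, 1)  # ten stand-in "images"
ids = np.arange(10)                    # matching identity labels
x_tr, y_tr, x_te, y_te = split_dataset(images, ids)
```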
Through steps S201 to S203, the sample face images can be preprocessed: irrelevant information in the sample face images is eliminated, useful real information is restored, the detectability of relevant information is enhanced, and the data are simplified as much as possible, improving the data quality of the training face image data used for model training and thus the reliability of the model in face recognition.
In step S103 of some embodiments, an original face recognition model is obtained; it may be constructed based on a convolutional neural network model such as MobileNet, and it includes a feature extraction network and a recognition network. Specifically, the feature extraction network is mainly used to extract visual features of an input face image to obtain the three-dimensional visual features corresponding to the face image, and the recognition network is mainly used to perform person identity recognition on the extracted three-dimensional visual features to determine the person identity corresponding to the face image. The feature extraction network comprises a first depthwise separable convolution layer, a first max pooling layer, a second depthwise separable convolution layer, a second max pooling layer, a third depthwise separable convolution layer and a third max pooling layer; the convolution kernel size of each depthwise separable convolution layer is 3×3, the stride is 2, and the number of output channels is 64; the kernel size of each max pooling layer is 2×2 with a stride of 2, and the depthwise separable convolution layers and max pooling layers are connected alternately. The recognition network comprises a fully connected layer, an average pooling layer and a softmax function; the feature dimension of the fully connected layer is set according to the actual situation, and the fully connected layer comprises a plurality of original hidden layer units.
Because the structure of the depthwise separable convolution layer is relatively simple, the overall model structure of the original face recognition model is also simple, which improves the applicability of deploying the model on edge devices.
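The lightweight nature of the depthwise separable convolution layers can be illustrated by comparing weight counts against a standard convolution (a back-of-the-envelope Python sketch; the helper names are hypothetical and biases are omitted for simplicity):

```python
def standard_conv_params(in_ch, out_ch, k):
    # Weights of a standard k x k convolution.
    return k * k * in_ch * out_ch

def depthwise_separable_params(in_ch, out_ch, k):
    # Depthwise k x k convolution (one filter per input channel)
    # followed by a 1 x 1 pointwise convolution, as in MobileNet-style layers.
    depthwise = k * k * in_ch
    pointwise = in_ch * out_ch
    return depthwise + pointwise

# A 3 x 3 layer with 64 input and 64 output channels, matching the
# convolution layers described for the feature extraction network.
std = standard_conv_params(64, 64, 3)        # 36864 weights
sep = depthwise_separable_params(64, 64, 3)  # 576 + 4096 = 4672 weights
```

Under these toy dimensions the depthwise separable layer needs roughly an eighth of the weights of a standard convolution, which is consistent with the deployability claim above.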
In step S104 of some embodiments, the first position distribution data and the first unit number of the original hidden layer units of the recognition network may be obtained through a preset script program or the like, where the first position distribution data includes the coordinate position of each original hidden layer unit on a preset coordinate system, and the first unit number is the total number of original hidden layer units.
In step S105 of some embodiments, based on the first position distribution data and the first unit number, it is determined whether there is room to add hidden layer units to the left and right of each original hidden layer unit, and the second unit number and second position distribution data of the newly added hidden layer units are determined according to the spatial saturation on the left and right sides of the original hidden layer units.
For example, suppose the fully connected layer includes six original hidden layer units arranged two per layer across three layers: the first layer contains A and B, the second layer C and D, and the third layer E and F. The positions where hidden layer units can be added are then the left side of A, the right side of B, the left side of C, the right side of D, the left side of E, and the right side of F, i.e., six positions at which newly added hidden layer units can be placed. The second unit number of the newly added hidden layer units can therefore be a value between 1 and 6, and the second position distribution data of the newly added hidden layer units consists of those six positions.
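The enumeration of candidate positions in a six-unit example of this kind can be sketched as follows (a toy Python illustration; the function name and the (side, unit) tuple representation are hypothetical):

```python
def candidate_positions(rows):
    # With original hidden layer units arranged two per layer, the outer
    # side of each unit in a row is unoccupied, giving two free slots per
    # layer where a newly added hidden layer unit could be placed.
    slots = []
    for left_unit, right_unit in rows:
        slots.append(("left", left_unit))
        slots.append(("right", right_unit))
    return slots

layers = [("A", "B"), ("C", "D"), ("E", "F")]  # three layers, two units each
slots = candidate_positions(layers)
# Six slots; the second unit number can then range from 1 to len(slots).
```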
Referring to fig. 3, in some embodiments, step S106 may include, but is not limited to, steps S301 to S303:
step S301, performing model enhancement on the original face recognition model according to the second unit number and the second position distribution data to obtain an intermediate face recognition model;
step S302, model training is carried out on the intermediate face recognition model based on training face image data, and a model loss value is obtained;
and step S303, carrying out parameter updating on the original face recognition model according to the model loss value to obtain the initial face recognition model.
In step S301 of some embodiments, a plurality of combination modes are formed from the value range of the second unit number and the selectable positions in the second position distribution data; iterative enhancement is performed on the original face recognition model based on the different combination modes, and in each iteration a combination mode is selected to add the newly added hidden layer units to the model structure of the original face recognition model, obtaining an intermediate face recognition model.
In step S302 of some embodiments, training face image data is input into an intermediate face recognition model, face recognition processing is performed on the training face image data based on a feature extraction network and a recognition network of the intermediate face recognition model, and model training is performed on the intermediate face recognition model according to a face recognition result and a sample character identity tag of the training face image data to obtain a model loss value, where a loss function of an iterative process of the model training may be represented as shown in formula (4):
where α_i is a hyperparameter mainly used to adjust the loss contribution of the weight W_i of the i-th newly added hidden layer unit, W_t denotes the model parameters of the intermediate face recognition model, and i is the number of steps at which network units need to be added, set according to the number of combination modes and generally set to 3.
In step S303 of some embodiments, when updating the parameters of the original face recognition model according to the model loss values, the model parameters may be optimized according to the model loss value of each iteration: the model parameters of each iteration round are computed until the number of iterations reaches a preset iteration threshold, the rounds are compared, and the model parameters of the round with the smallest model loss value are taken as the final model parameters. The original face recognition model is then adjusted according to these parameters to obtain the initial face recognition model, where the optimization process may be expressed as shown in formula (5):
where η and β are hyperparameters that can be set manually and are generally set to 1.
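The parameter-selection logic of step S303, keeping the parameters from the iteration round with the smallest model loss value, can be sketched as follows (a minimal Python illustration; the function name and the toy loss/parameter records are hypothetical, and the actual gradient update of formula (5) is not reproduced here):

```python
def select_best_parameters(iteration_results):
    # Given (loss, params) pairs recorded once per iteration round,
    # return the parameters of the round with the smallest model loss.
    best_loss, best_params = min(iteration_results, key=lambda r: r[0])
    return best_loss, best_params

# Toy records for three iteration rounds of model enhancement.
rounds = [(0.92, {"round": 1}), (0.41, {"round": 2}), (0.58, {"round": 3})]
loss, params = select_best_parameters(rounds)  # round 2 has the lowest loss
```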
Through steps S301 to S303, the original face recognition model can be enhanced based on the newly added hidden layer units, enriching its network structure; the model parameters and model loss value of each iterative enhancement round are computed, the model parameters that make the model optimal are determined according to the model loss values, and the original face recognition model is updated based on those optimal parameters to obtain the initial face recognition model.
Referring to fig. 4, in some embodiments, step S107 may include, but is not limited to, steps S401 to S402:
step S401, performing model test on the initial face recognition model based on the test face image data to obtain a test result;
and step S402, performing model optimization on the initial face recognition model according to the test result to obtain a target face recognition model.
In step S401 of some embodiments, the test face image data is input into the initial face recognition model, face recognition is performed on the test face image data based on the feature extraction network and recognition network of the initial face recognition model to obtain a test person identity tag for each test face image, the test person identity tags produced by the initial face recognition model are aggregated, and the sample person identity tag and test person identity tag of each test face image are compared to obtain a test result.
In step S402 of some embodiments, the model parameters of the initial face recognition model are fine-tuned according to the consistency probability between the sample person identity tags and the test person identity tags of the test face images, so that the consistency probability exceeds a preset threshold, and the model at that point is taken as the final target face recognition model. The consistency probability is computed by counting the total number of test face images, counting the number of qualified images whose sample person identity tag and test person identity tag are the same, and dividing the qualified image count by the total image count.
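The consistency-probability computation described above can be sketched as follows (a minimal Python illustration; the function name and the toy identity tags are hypothetical):

```python
def consistency_probability(sample_tags, test_tags):
    # Qualified images are those whose test person identity tag matches
    # the sample person identity tag; the probability is their share of
    # the total number of test face images.
    total = len(sample_tags)
    qualified = sum(1 for s, t in zip(sample_tags, test_tags) if s == t)
    return qualified / total

truth = ["alice", "bob", "carol", "dave"]  # sample person identity tags
preds = ["alice", "bob", "eve", "dave"]    # test person identity tags
p = consistency_probability(truth, preds)  # 3 of 4 images qualify
```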
In some embodiments, an Adam optimizer may be employed in fine tuning the model parameters.
Through the steps S401 to S402, model optimization can be performed on the initial face recognition model after training, fine adjustment is performed on model parameters, and face recognition performance of the model is further improved, so that accuracy of the model on face recognition is improved.
The model updating method of the embodiment of the application acquires sample face image data; divides the sample face image data to obtain training face image data and test face image data; acquires an original face recognition model, the original face recognition model including a feature extraction network and a recognition network; acquires first position distribution data and a first unit number of the original hidden layer units of the recognition network; determines a second unit number and second position distribution data of newly added hidden layer units based on the first position distribution data and the first unit number; performs model training on the original face recognition model based on the training face image data and the second unit number and second position distribution data of the newly added hidden layer units to obtain an initial face recognition model; and performs model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, the target face recognition model being used to recognize a target face image of a target object to obtain person identity information of the target object. The method can perform model enhancement on the original face recognition model based on the newly added hidden layer units, enriching the network structure of the original face recognition model; by introducing the newly added hidden layer units only during the training process, the face recognition performance of the model can be trained without changing the overall structure of the model, thereby better improving the face recognition accuracy of the model.
Meanwhile, model optimization is performed on the initial face recognition model using the test face image data, which further optimizes model performance so that the target face recognition model achieves better face recognition accuracy. In addition, the model structure of the target face recognition model is simple and suitable for a variety of application scenarios, so it can be used in edge-side face recognition systems, which helps address the limited face recognition performance of terminal devices running deep learning models; payment transactions can then be completed by face-scanning payment in different financial transaction settings, improving the applicability of face recognition in financial transaction scenarios and the transaction efficiency of face-recognition-based financial transactions.
Fig. 5 is an optional flowchart of a face recognition method according to an embodiment of the present application; the method in fig. 5 may include, but is not limited to, steps S501 to S502.
Step S501, a target face image of a target object is acquired;
step S502, inputting the target face image into a target face recognition model for face recognition to obtain the character identity information of the target object, wherein the target face recognition model is obtained according to the model updating method.
In step S501 of some embodiments, a target face image of a target object may be obtained by writing a web crawler that crawls data in a targeted manner after a data source is set, where the data source may be a person image database or the like; the target face image of the target object may also be obtained by shooting with tools such as video cameras and still cameras, or by other means, without limitation. The target object includes, without limitation, a network user, a mobile device user, and the like.
In step S502 of some embodiments, a target face image is input into a target face recognition model, and feature extraction is performed on the target face image based on a feature extraction network, so as to obtain three-dimensional visual features of a face, where the three-dimensional visual features of the face include facial feature information such as eyes, mouth, and facial contours of a target object. Further, identity recognition is carried out on three-dimensional visual characteristics of the face based on a softmax function of a recognition network to obtain character identity information of a target object, so that payment transaction can be completed in different financial transaction occasions in a face-brushing payment mode, applicability of face recognition in financial transaction scenes is improved, and transaction efficiency of financial transaction based on face recognition is improved.
Referring to fig. 6, in some embodiments, the target face recognition model includes a feature extraction network and a recognition network, and step S502 includes, but is not limited to, steps S601 to S602:
step S601, extracting features of a target face image based on a feature extraction network to obtain three-dimensional visual features of the face;
step S602, carrying out identity recognition on the three-dimensional visual characteristics of the human face based on the recognition network to obtain the character identity information of the target object.
In step S601 of some embodiments, multi-level visual feature extraction is performed on the input target face image based on the depthwise separable convolution layers and max pooling layers of the feature extraction network to obtain the three-dimensional visual features corresponding to the target face image, where the three-dimensional visual features of the face include facial feature information such as the eyes, mouth, and facial contour of the target object.
In step S602 of some embodiments, the extracted three-dimensional visual features may be mapped into a preset vector space based on the fully connected layer and average pooling layer of the recognition network, yielding dimension-transformed visual features; identity recognition is then performed on these features based on the softmax function and a plurality of candidate person identity tags, the best-matching candidate person identity tag is selected as the person identity tag of the target object, and the person information contained in that tag is taken as the person identity information of the target object.
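The softmax-based identity matching of the recognition network can be sketched as follows (a minimal NumPy illustration on toy logits; the function names and candidate tags are hypothetical, and the max-shift in the softmax is a standard numerical-stability precaution not stated in the text):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over identity-tag scores.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def identify(logits, candidate_tags):
    # Select the best-matching candidate person identity tag.
    probs = softmax(np.asarray(logits, dtype=float))
    return candidate_tags[int(np.argmax(probs))], probs

tags = ["person_a", "person_b", "person_c"]  # candidate identity tags
label, probs = identify([0.2, 2.5, 0.1], tags)
```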
According to the face recognition method, the target face image of the target object is acquired and input into the target face recognition model for face recognition to obtain the person identity information of the target object; the person identity of the target object can thus be recognized from the image feature information of the target face image, improving the accuracy of person identity recognition for the target object.
Referring to fig. 7, an embodiment of the present application further provides a model updating device, which can implement the above model updating method, where the device includes:
a data acquisition module 701, configured to acquire sample face image data;
the data dividing module 702 is configured to divide the sample face image data into training face image data and test face image data;
a model obtaining module 703, configured to obtain an original face recognition model, where the original face recognition model includes a feature extraction network and a recognition network;
a network element data acquisition module 704, configured to acquire first location distribution data and a first element number of an original hidden layer element of the identification network;
a new data determining module 705, configured to determine a second unit number and second position distribution data of the new hidden layer unit based on the first position distribution data and the first unit number;
The model training module 706 is configured to perform model training on the original face recognition model based on the training face image data and the second unit number and the second position distribution data of the newly added hidden layer unit, to obtain an initial face recognition model;
the model optimization module 707 is configured to perform model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, where the target face recognition model is used to perform recognition processing on a target face image of a target object to obtain person identity information of the target object.
The specific implementation manner of the model updating device is basically the same as that of the specific embodiment of the model updating method, and is not described herein.
Referring to fig. 8, an embodiment of the present application further provides a face recognition device, which can implement the face recognition method, where the device includes:
an image acquisition module 801, configured to acquire a target face image of a target object;
the face recognition module 802 is configured to input a target face image into a target face recognition model for performing face recognition, and obtain person identity information of the target object, where the target face recognition model is obtained according to the model updating device.
The specific implementation of the face recognition device is basically the same as the specific embodiment of the face recognition method, and will not be described herein.
The embodiment of the application also provides electronic equipment, which comprises: the face recognition method comprises a memory, a processor, a program stored in the memory and capable of running on the processor, and a data bus for realizing connection communication between the processor and the memory, wherein the program is executed by the processor to realize the face recognition method. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 901 may be implemented by a general purpose CPU (central processing unit), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solution provided by the embodiments of the present application;
the memory 902 may be implemented in the form of read-only memory (Read Only Memory, ROM), static storage, dynamic storage, or random access memory (Random Access Memory, RAM). The memory 902 may store an operating system and other application programs; when the technical solution provided in the embodiments of the present disclosure is implemented by software or firmware, the relevant program code is stored in the memory 902 and invoked by the processor 901 to execute the model updating method or face recognition method of the embodiments of the present disclosure;
An input/output interface 903 for inputting and outputting information;
the communication interface 904 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
a bus 905 that transfers information between the various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively coupled to each other within the device via a bus 905.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to realize the model updating method or the face recognition method.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the application provides a model updating method, a face recognition method, a model updating device, a face recognition device, an electronic device and a storage medium, which acquire sample face image data; divide the sample face image data to obtain training face image data and test face image data; acquire an original face recognition model, the original face recognition model including a feature extraction network and a recognition network; acquire first position distribution data and a first unit number of the original hidden layer units of the recognition network; determine a second unit number and second position distribution data of newly added hidden layer units based on the first position distribution data and the first unit number; perform model training on the original face recognition model based on the training face image data and the second unit number and second position distribution data of the newly added hidden layer units to obtain an initial face recognition model; and perform model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, the target face recognition model being used to recognize a target face image of a target object to obtain person identity information of the target object. The method can perform model enhancement on the original face recognition model based on the newly added hidden layer units, enriching the network structure of the original face recognition model; by introducing the newly added hidden layer units only during the training process, the face recognition performance of the model can be trained without changing the overall structure of the model, thereby better improving the face recognition accuracy of the model.
Meanwhile, model optimization is performed on the initial face recognition model using the test face image data, which further optimizes model performance so that the target face recognition model achieves better face recognition accuracy. In addition, the model structure of the target face recognition model is simple and suitable for a variety of application scenarios, so it can be used in edge-side face recognition systems, which helps address the limited face recognition performance of terminal devices running deep learning models; payment transactions can then be completed by face-scanning payment in different financial transaction settings, improving the applicability of face recognition in financial transaction scenarios and the transaction efficiency of face-recognition-based financial transactions.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1-6 are not limiting on the embodiments of the application and may include more or fewer steps than shown, or certain steps may be combined, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including multiple instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing a program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, but they do not thereby limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A method of model updating, the method comprising:
obtaining sample face image data;
carrying out data division on the sample face image data to obtain training face image data and test face image data;
acquiring an original face recognition model, wherein the original face recognition model comprises a feature extraction network and a recognition network;
acquiring first position distribution data and first unit quantity of original hidden layer units of the identification network;
determining a second unit number and second position distribution data of the newly added hidden layer unit based on the first position distribution data and the first unit number;
model training is carried out on the original face recognition model based on the training face image data, the second unit number of the newly added hidden layer units and the second position distribution data, and an initial face recognition model is obtained;
and carrying out model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, wherein the target face recognition model is used for carrying out recognition processing on a target face image of a target object to obtain character identity information of the target object.
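The claims do not prescribe how the second unit number and second position distribution are derived from the first; a minimal sketch, assuming the new hidden units are a fixed fraction of the existing count and are spread evenly over the original index range (both the growth ratio and the placement policy are illustrative assumptions, not from the patent):

```python
import math

def plan_new_hidden_units(first_positions, first_count, growth=0.25):
    """Derive the number and positions of hidden units to add.

    first_positions: indices of the existing hidden units
    first_count:     how many units the layer currently has
    growth:          assumed fraction of extra units (illustrative)
    """
    second_count = max(1, math.ceil(first_count * growth))
    # Spread the new units evenly over the existing index range.
    step = first_count / second_count
    second_positions = [int(round(i * step)) for i in range(second_count)]
    return second_count, second_positions

count, positions = plan_new_hidden_units(list(range(8)), 8)
```

With 8 original units and a 0.25 growth ratio, this plans 2 new units at evenly spaced positions.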
2. The method for updating a model according to claim 1, wherein the performing data division on the sample face image data to obtain training face image data and test face image data includes:
performing image brightness adjustment on the sample face image data to obtain middle face image data;
performing pixel normalization on the intermediate face image data to obtain initial face image data;
and carrying out data division on the initial face image data according to preset proportion parameters to obtain the training face image data and the test face image data.
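The preprocessing steps of claim 2 (brightness adjustment, pixel normalization, ratio-based split) can be sketched as follows; the brightness factor, train/test ratio, and flat-list image representation are all illustrative assumptions, since the patent does not fix them:

```python
import random

def preprocess_and_split(images, brightness=1.1, train_ratio=0.8, seed=0):
    """Brightness-adjust, normalize pixels to [0, 1], and split by a preset ratio.

    images:      list of flat pixel lists with values in [0, 255]
    brightness:  assumed multiplicative brightness factor (illustrative)
    train_ratio: assumed preset proportion parameter (illustrative)
    """
    # Image brightness adjustment -> intermediate face image data.
    adjusted = [[min(255.0, p * brightness) for p in img] for img in images]
    # Pixel normalization -> initial face image data.
    normalized = [[p / 255.0 for p in img] for img in adjusted]
    # Data division by the preset proportion parameter.
    shuffled = normalized[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = preprocess_and_split([[0, 128, 255]] * 10)
```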
3. The model updating method according to claim 1, wherein the model training the original face recognition model based on the training face image data and the second unit number and the second position distribution data of the newly added hidden layer unit to obtain an initial face recognition model includes:
performing model enhancement on the original face recognition model according to the second unit number and the second position distribution data to obtain an intermediate face recognition model;
model training is carried out on the intermediate face recognition model based on the training face image data, and a model loss value is obtained;
and performing parameter updating on the original face recognition model according to the model loss value to obtain the initial face recognition model.
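The "model enhancement" step of claim 3 adds hidden units at the planned positions. The patent does not specify how the new units are initialized; one common choice, sketched here, is a Net2WiderNet-style widening that duplicates an existing unit and halves its outgoing weights, so the enhanced network computes the same function before fine-tuning (this initialization is an assumption, not the patented method):

```python
import numpy as np

def widen_hidden_layer(W_in, W_out, positions):
    """Add hidden units at the given positions by duplicating existing ones.

    W_in:      (hidden, in) weights into the hidden layer
    W_out:     (out, hidden) weights out of the hidden layer
    positions: indices of the units to duplicate (the planned new positions)
    """
    W_in, W_out = W_in.copy(), W_out.copy()
    for pos in positions:
        W_in = np.vstack([W_in, W_in[pos:pos + 1]])       # copy incoming weights
        W_out[:, pos] *= 0.5                              # halve the original contribution
        W_out = np.hstack([W_out, W_out[:, pos:pos + 1]]) # new unit carries the other half
    return W_in, W_out

# Check that the widened network is function-preserving on one input.
W_in = np.ones((2, 3))
W_out = np.ones((1, 2))
x = np.array([1.0, 2.0, 3.0])
y_before = W_out @ np.maximum(W_in @ x, 0)   # ReLU hidden layer
W_in2, W_out2 = widen_hidden_layer(W_in, W_out, [0])
y_after = W_out2 @ np.maximum(W_in2 @ x, 0)
```

Training on the face image data and updating parameters from the model loss value then proceed as in any supervised fine-tuning loop.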
4. A model updating method according to any one of claims 1 to 3, wherein the model optimizing the initial face recognition model based on the test face image data to obtain a target face recognition model comprises:
performing model test on the initial face recognition model based on the test face image data to obtain a test result;
and carrying out model optimization on the initial face recognition model according to the test result to obtain the target face recognition model.
5. A method of face recognition, the method comprising:
acquiring a target face image of a target object;
inputting the target face image into a target face recognition model for face recognition to obtain the character identity information of the target object, wherein the target face recognition model is obtained according to the model updating method of any one of claims 1 to 4.
6. The face recognition method according to claim 5, wherein the target face recognition model includes a feature extraction network and a recognition network, the inputting the target face image into the target face recognition model for face recognition, to obtain the person identity information of the target object, includes:
performing feature extraction on the target face image based on the feature extraction network to obtain a face three-dimensional visual feature;
and carrying out identity recognition on the three-dimensional visual characteristics of the human face based on the recognition network to obtain the character identity information of the target object.
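The identity-recognition step of claim 6 maps an extracted feature to a person's identity. A common realization, sketched here under stated assumptions, matches the feature embedding against a gallery of enrolled embeddings by cosine similarity; the gallery format, threshold value, and matching rule are illustrative, not taken from the patent:

```python
import math

def identify(embedding, gallery, threshold=0.6):
    """Return the best-matching identity, or None if no match clears the threshold.

    embedding: feature vector extracted from the target face image
    gallery:   dict mapping identity name -> enrolled embedding (assumed format)
    threshold: assumed acceptance threshold (illustrative)
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        score = cos(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

gallery = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
who = identify([0.9, 0.1], gallery)
```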
7. A model updating apparatus, characterized in that the model updating apparatus comprises:
the data acquisition module is used for acquiring sampled face image data;
the data dividing module is used for carrying out data division on the sample face image data to obtain training face image data and test face image data;
the model acquisition module is used for acquiring an original face recognition model, wherein the original face recognition model comprises a feature extraction network and a recognition network;
a network element data acquisition module, configured to acquire first location distribution data and a first number of units of an original hidden layer unit of the identification network;
the newly added data determining module is used for determining a second unit number and second position distribution data of the newly added hidden layer unit based on the first position distribution data and the first unit number;
the model training module is used for carrying out model training on the original face recognition model based on the training face image data, the second unit number of the newly added hidden layer units and the second position distribution data to obtain an initial face recognition model;
and the model optimization module is used for carrying out model optimization on the initial face recognition model based on the test face image data to obtain a target face recognition model, and the target face recognition model is used for carrying out recognition processing on a target face image of a target object to obtain character identity information of the target object.
8. A face recognition device, the device comprising:
the image acquisition module is used for acquiring a target face image of a target object;
the face recognition module is used for inputting the target face image into a target face recognition model for face recognition to obtain the character identity information of the target object, wherein the target face recognition model is obtained according to the model updating device of claim 7.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements:
the model updating method according to any one of claims 1 to 4;
or
a face recognition method according to any one of claims 5 to 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements:
the model updating method according to any one of claims 1 to 4;
or
a face recognition method according to any one of claims 5 to 6.
CN202310638481.8A 2023-05-31 2023-05-31 Model updating method and device, face recognition method and device and storage medium Pending CN116597495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310638481.8A CN116597495A (en) 2023-05-31 2023-05-31 Model updating method and device, face recognition method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310638481.8A CN116597495A (en) 2023-05-31 2023-05-31 Model updating method and device, face recognition method and device and storage medium

Publications (1)

Publication Number Publication Date
CN116597495A true CN116597495A (en) 2023-08-15

Family

ID=87607983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310638481.8A Pending CN116597495A (en) 2023-05-31 2023-05-31 Model updating method and device, face recognition method and device and storage medium

Country Status (1)

Country Link
CN (1) CN116597495A (en)


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination