CN117115881A - Face recognition system based on machine learning - Google Patents
- Publication number
- CN117115881A (application CN202310942847.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- module
- image
- feature
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/70—Multimodal biometrics, e.g. combining information from different biometric modalities
-
Abstract
The invention relates to the technical field of face recognition systems, in particular to a face recognition system based on machine learning, which consists of a data acquisition module, a face detection and positioning module, a feature extraction module, a feature matching and recognition module, a security and privacy protection module, a system management and optimization module, a multi-modal data fusion module, a model interpretability module and a federal learning module. In the invention, advanced deep learning algorithms are applied to face detection, feature extraction and matching, improving the recognition accuracy and robustness of the system. Adversarial training and defense mechanisms are introduced to strengthen the system's adversarial defense capability, making it harder to influence with deceptive samples and attacks. The multi-modal data fusion module combines audio and gesture information to further improve accuracy and robustness, and in terms of data privacy and security protection, enhanced encryption and permission control ensure the confidentiality and integrity of data.
Description
Technical Field
The invention relates to the technical field of face recognition systems, in particular to a face recognition system based on machine learning.
Background
A face recognition system is a technology for recognizing and verifying faces using computer technology. It captures face images through a camera or image input device and analyzes and compares the faces in those images using algorithms. Face recognition typically proceeds in steps: first the position of a face in the image is detected, then feature points or feature vectors are extracted to describe and represent the face. Next, the system compares the extracted face features with face templates stored in advance in a database to judge whether they belong to the same person. Face recognition systems are widely applied to face unlocking of mobile phones, face payment, security access control and the like.
In actual use, the accuracy of existing face recognition systems in face detection, feature extraction and matching is relatively low, and they are easily affected by factors such as illumination, angle and occlusion, resulting in a high false recognition rate. Secondly, existing systems are fragile in defending against attacks and are easily deceived by adversarial samples and attack means. In addition, existing systems typically rely on only a single face image for recognition, ignoring other available multimodal information such as audio and pose, resulting in insufficient robustness and accuracy. Existing systems also carry risks in data privacy and security protection: the lack of strong encryption and permission control mechanisms can lead to leakage and abuse of users' personal information. Finally, existing systems may have limitations in performance and efficiency and may be constrained when processing large-scale data and in real-time applications.
Disclosure of Invention
The invention aims to solve the defects in the prior art, and provides a face recognition system based on machine learning.
In order to achieve the above purpose, the present invention adopts the following technical scheme: the face recognition system based on machine learning consists of a data acquisition module, a face detection and positioning module, a feature extraction module, a feature matching and recognition module, a security and privacy protection module, a system management and optimization module, a multi-mode data fusion module, a model interpretability module and a federal learning module;
the functional items of the data acquisition module comprise image input and image preprocessing;
the functional items of the face detection and positioning module comprise face detection and face alignment;
the functional item of the feature extraction module is specifically depth feature extraction;
the functional items of the feature matching and identifying module comprise feature comparison and identifying decisions;
the functional items of the security and privacy protection module comprise data encryption and storage and authority control;
the functional items of the system management and optimization module comprise unsupervised learning and adversarial training;
the function item of the multi-mode data fusion module is specifically multi-mode feature fusion;
The functional items of the model interpretability module are specifically interpretability analysis;
the function item of the federal learning module is specifically federal learning.
As a further scheme of the invention, the image input uses image acquisition equipment comprising a camera and a video camera to acquire face image data in real time;
the image preprocessing comprises image format conversion, denoising and brightness contrast adjustment;
the image format conversion is specifically to convert the collected image data into a format suitable for processing by a specific algorithm by using an image processing algorithm, wherein the format comprises an RGB format and a gray scale format;
the denoising is specifically to eliminate noise and artifacts in an image by adopting a noise removal algorithm based on deep learning;
the brightness and contrast adjustment is specifically performed by using a histogram equalization and contrast enhancement algorithm, and the visibility of the image is enhanced by performing self-adaptive adjustment according to the brightness and contrast of the image.
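The histogram equalization mentioned above can be sketched numerically. The following is a minimal pure-Python illustration of global histogram equalization for an 8-bit grayscale image (a flat list of pixel values), not the patented implementation:

```python
# Illustrative sketch: global histogram equalization for an 8-bit grayscale
# image, the classic technique the brightness/contrast step builds on.

def equalize_histogram(pixels, levels=256):
    """Remap pixel intensities so the cumulative histogram is roughly linear."""
    n = len(pixels)
    hist = [0] * levels                 # histogram of intensity values
    for p in pixels:
        hist[p] += 1
    cdf = [0] * levels                  # cumulative distribution function
    running = 0
    for i, h in enumerate(hist):
        running += h
        cdf[i] = running
    # Smallest non-zero CDF value, used to stretch the mapping to [0, levels-1].
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                    # constant image: nothing to equalize
        return list(pixels)
    lut = [round((cdf[i] - cdf_min) / (n - cdf_min) * (levels - 1))
           for i in range(levels)]
    return [lut[p] for p in pixels]

# A dark, low-contrast patch gets stretched across the full intensity range.
dark = [50, 50, 51, 52, 52, 53, 54, 55]
print(equalize_histogram(dark))
```

A production system would apply this per-channel or use an adaptive variant (e.g. CLAHE) as the adaptive adjustment described above implies.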
As a further scheme of the invention, the face detection adopts Haar cascades and convolutional neural networks;
the Haar cascade describes various attributes of a human face by utilizing feature templates, carries out sliding-window detection on the image, and carries out classification judgment by utilizing a cascade classifier;
the convolutional neural network performs face detection by training a neural network with multi-layer convolution and pooling operations;
the face alignment comprises key point positioning and geometric transformation;
the key point positioning is carried out on the face image by using a deep learning model, specifically a face key point detection network, wherein the key points comprise the facial feature points of the eyes, nose and mouth;
the geometric transformation is specifically that based on the position information of the key points, geometric transformation including affine transformation and perspective transformation is carried out, so that face images are aligned and kept at consistent scales, angles and positions, scale and angle differences in different images are eliminated, and consistent face features are extracted.
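The alignment step above can be illustrated with a small sketch: given the two detected eye centres, derive the 2×3 similarity (rotation + scale + translation) matrix that maps them onto canonical template positions. The keypoint values and template coordinates below are illustrative assumptions, not from the patent:

```python
# Hedged sketch of keypoint-based face alignment via a similarity transform.
import math

def similarity_transform(left_eye, right_eye,
                         tmpl_left=(30.0, 40.0), tmpl_right=(70.0, 40.0)):
    """Return the 2x3 affine matrix aligning the eye pair to the template."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)                       # in-plane roll of the face
    scale = math.hypot(tmpl_right[0] - tmpl_left[0],
                       tmpl_right[1] - tmpl_left[1]) / math.hypot(dx, dy)
    cos_a, sin_a = scale * math.cos(-angle), scale * math.sin(-angle)
    # Rotate/scale about the left eye, then translate it onto the template.
    tx = tmpl_left[0] - (cos_a * left_eye[0] - sin_a * left_eye[1])
    ty = tmpl_left[1] - (sin_a * left_eye[0] + cos_a * left_eye[1])
    return [[cos_a, -sin_a, tx], [sin_a, cos_a, ty]]

def apply(mat, pt):
    """Apply a 2x3 affine matrix to a 2D point."""
    x, y = pt
    return (mat[0][0] * x + mat[0][1] * y + mat[0][2],
            mat[1][0] * x + mat[1][1] * y + mat[1][2])

# A face tilted 45 degrees: both eyes land on the canonical template positions.
M = similarity_transform((100.0, 120.0), (140.0, 160.0))
print(apply(M, (100.0, 120.0)), apply(M, (140.0, 160.0)))
```

In practice the same matrix would then be used to warp every pixel of the face crop; affine or perspective warps over more than two keypoints follow the same idea with a least-squares fit.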
As a further scheme of the invention, the depth feature extraction adopts the ArcFace algorithm and the SphereFace algorithm;
the ArcFace algorithm generates discriminative face feature vectors by introducing an additive angular margin loss function and a dedicated network structure, and realizes face feature extraction by minimizing the distance between feature vectors of the same person and maximizing the distance between feature vectors of different persons;
the SphereFace algorithm generates a robust face feature representation by performing spherical mapping on feature vectors in the feature space, and realizes face feature discrimination by minimizing the angular interval between feature vectors of the same person and maximizing the angular interval between different persons.
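The angular margin idea behind these losses can be shown numerically. The sketch below is a minimal illustration of an ArcFace-style logit (the embedding values and the scale/margin hyper-parameters are illustrative assumptions): both the embedding and the class-centre weights are L2-normalized so their dot product is cos(θ), and the margin m is added to θ for the true class, forcing same-identity embeddings into a tighter angular cluster.

```python
# Toy sketch of the additive angular margin used by ArcFace-style losses.
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def arcface_logit(embedding, class_weight, margin=0.5, scale=64.0, is_target=True):
    """Scaled cosine logit with the angular margin applied to the true class."""
    cos_theta = sum(a * b for a, b in zip(l2_normalize(embedding),
                                          l2_normalize(class_weight)))
    cos_theta = max(-1.0, min(1.0, cos_theta))   # numerical safety for acos
    theta = math.acos(cos_theta)
    if is_target:
        theta += margin   # penalise the true class: it must be closer to win
    return scale * math.cos(theta)

emb = [0.9, 0.1, 0.4]
w_same = [1.0, 0.0, 0.5]   # centre of the matching identity (made-up values)
# The margin lowers the true-class logit relative to the plain cosine logit.
print(arcface_logit(emb, w_same, is_target=True),
      arcface_logit(emb, w_same, is_target=False))
```

Training with this logit in a softmax cross-entropy makes the network compensate for the margin, which is what yields the intra-class compactness and inter-class separation described above.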
As a further scheme of the invention, the feature comparison algorithm comprises Euclidean distance and cosine similarity;
the Euclidean distance is used for measuring the difference or the similarity between the two face feature vectors by calculating the Euclidean distance between the two face feature vectors;
the cosine similarity is based on an included angle between two face feature vectors to measure the similarity between the two face feature vectors, and the value range of the cosine similarity is between-1 and 1;
the algorithm for identifying the decision adopts threshold setting and decision rules;
the recognition decision judges whether the two feature vectors are matched or similar by setting a threshold value;
and the decision rule is set according to the result of the feature comparison and the threshold, and a binary judgment is executed: if the comparison result is greater than the threshold, the faces match; otherwise, they do not.
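The comparison-and-decision step above can be sketched directly. The 0.6 threshold below is an illustrative placeholder; a real deployment tunes it on a validation set:

```python
# Sketch of feature comparison (Euclidean distance, cosine similarity) and the
# threshold-based recognition decision described above.
import math

def euclidean_distance(a, b):
    """Straight-line distance between two feature vectors (smaller = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_match(a, b, threshold=0.6):
    """Binary decision rule: similarity above the threshold means 'same person'."""
    return cosine_similarity(a, b) > threshold

probe = [0.12, 0.80, 0.35]      # made-up feature vectors
enrolled = [0.10, 0.78, 0.40]
print(cosine_similarity(probe, enrolled), is_match(probe, enrolled))
```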
As a further scheme of the invention, the data encryption specifically adopts a symmetric encryption algorithm to carry out AES encryption on the stored face image and data, so as to ensure confidentiality of the data in the storage and transmission processes;
the authority control comprises an identity authentication mechanism and an access authority strategy;
the authentication mechanism ensures that only authorized users can access the system and related data by requiring the users to provide authentication information;
The access right policy is set to be a hierarchical access right policy based on the user roles and the right level.
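A hierarchical, role-based permission policy of the kind described above can be reduced to a small lookup. The role names and permissions below are hypothetical examples, not taken from the patent:

```python
# Illustrative role-based access-control check for the permission policy above.

ROLE_PERMISSIONS = {
    "admin":    {"read_templates", "write_templates", "manage_users", "view_logs"},
    "operator": {"read_templates", "view_logs"},
    "auditor":  {"view_logs"},
}

def authorize(user_role, action):
    """Grant the action only if the role's permission set contains it."""
    return action in ROLE_PERMISSIONS.get(user_role, set())

print(authorize("operator", "read_templates"))   # operators may read templates
print(authorize("auditor", "write_templates"))   # auditors may not write
```

A real system would combine this with the identity authentication mechanism above, so the role is only trusted after the user's credentials have been verified.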
As a further scheme of the invention, the unsupervised learning adopts a self-supervised learning method: self-supervised tasks and learning targets are set, positive and negative pairs of face images are generated by image rotation, cropping or color transformation, and the deep learning model learns the key features of face images through these targets, improving the performance of the system on unlabeled data;
the adversarial training includes generation of adversarial samples and a defense mechanism;
the generated adversarial samples are misleading input samples produced by a generative adversarial network; by generating adversarial samples, the deep learning model learns to identify and resist attacks aimed at the face recognition system, including face camouflage and image tampering;
the defense mechanism is specifically random perturbation of the input space, which increases the difficulty of adversarial attacks by applying random disturbance or interference to the input image.
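The input-space perturbation defence can be sketched as follows: small uniform noise is added to each pixel before inference so that precisely crafted adversarial perturbations are partially washed out. The noise amplitude is an illustrative choice:

```python
# Hedged sketch of the input-space random perturbation defence described above.
import random

def randomize_input(pixels, amplitude=4, seed=None):
    """Perturb 8-bit pixel values by uniform integer noise in [-amplitude, amplitude]."""
    rng = random.Random(seed)
    return [max(0, min(255, p + rng.randint(-amplitude, amplitude)))
            for p in pixels]

image = [0, 128, 255, 37, 200]           # made-up pixel values
print(randomize_input(image, amplitude=4, seed=42))
```

The amplitude trades robustness against recognition accuracy: too little noise leaves attacks intact, too much degrades the features the extractor relies on.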
As a further scheme of the invention, the multi-modal feature fusion comprises a multi-modal fusion network, a weighted average fusion and a decision-level fusion;
The multi-modal fusion network is specifically a multi-modal fusion convolutional neural network, the multi-modal fusion convolutional neural network adopts a plurality of branches, each branch processes the characteristics of one mode, and a plurality of mode information are integrated through network level fusion;
the weighted average fusion is used for setting weights for the features of different modes, and fusion is carried out in a weighted average mode;
after the multi-mode features are extracted through the decision-level fusion, the features of each mode are classified by using a plurality of single-mode classifiers, and the results of all the single-mode classifiers are synthesized through voting, weighted voting and integrated learning modes, so that a final fusion decision result is obtained.
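Two of the fusion strategies above can be shown numerically: feature-level weighted averaging of per-modality vectors, and decision-level majority voting over per-modality classifier outputs. The weights and vectors below are illustrative:

```python
# Sketch of weighted-average feature fusion and decision-level voting fusion.
from collections import Counter

def weighted_feature_fusion(vectors, weights):
    """Element-wise weighted average of equal-length per-modality feature vectors."""
    total = sum(weights)
    return [sum(w * v[i] for w, v in zip(weights, vectors)) / total
            for i in range(len(vectors[0]))]

def majority_vote(decisions):
    """Decision-level fusion: the identity predicted by most modalities wins."""
    return Counter(decisions).most_common(1)[0][0]

face_vec  = [0.9, 0.1]   # made-up per-modality feature vectors
voice_vec = [0.7, 0.3]
gait_vec  = [0.8, 0.0]
fused = weighted_feature_fusion([face_vec, voice_vec, gait_vec],
                                weights=[0.5, 0.3, 0.2])
print(fused)
print(majority_vote(["alice", "alice", "bob"]))
```

The multi-modal fusion network described above performs the same integration, but learns the combination weights end-to-end inside the network instead of fixing them by hand.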
As a further scheme of the invention, the interpretability analysis adopts a gradient importance method, a class activation graph technology and an interpretative deep learning model;
the gradient importance method specifically uses gradient-based importance methods, including Gradient × Input, to explain the model's decision process and regions of interest for an input face image;
the class activation map technology specifically adopts the Grad-CAM class activation map method, which generates an activation map by weighting the input feature maps with their gradients to display the regions the model attends to when identifying a face, and to understand the model's dependence on important facial features and regions;
the interpretable deep learning model explains the prediction results of the model by modeling the model's training process and provides contribution analysis for the features.
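The Gradient × Input attribution mentioned above reduces to a simple product in the linear case, which is enough to show the idea: for a linear score f(x) = w·x, the gradient with respect to each input is just w_i, so the per-feature attribution is w_i·x_i; deep models compute the same product with back-propagated gradients. The weights and inputs below are made up:

```python
# Toy numeric sketch of Gradient x Input attribution on a linear model.

def gradient_times_input(weights, inputs):
    """Attribution of each input feature to the score of a linear model."""
    return [w * x for w, x in zip(weights, inputs)]

w = [0.8, -0.2, 0.0, 0.5]     # model sensitivity (gradient) per feature
x = [1.0, 2.0, 3.0, 0.0]      # one input sample
print(gradient_times_input(w, x))   # zero weight or zero input -> no credit
```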
As a further aspect of the invention, the federal learning adopts the Federated Averaging algorithm and the Secure Aggregation technology;
the Federated Averaging algorithm is used for carrying out local model training on distributed equipment, and aggregating model parameters to a central server for updating a global model;
the Secure Aggregation technology uses secure multiparty computation: before model parameters are aggregated, the participants securely encrypt them and carry out the computation through secure protocols, including homomorphic encryption and secret sharing, so as to obtain the aggregated model parameters without exposing any individual's data.
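The two techniques named above can be sketched on toy parameter vectors: Federated Averaging as a sample-size-weighted mean of client updates, and additive secret sharing, where each client splits its parameters into random shares so that no single share reveals the value but the sum reconstructs it. All numbers are illustrative:

```python
# Sketch of FedAvg aggregation and additive secret sharing on toy data.
import random

def federated_average(client_params, client_sizes):
    """Sample-size-weighted average of per-client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(n * p[i] for n, p in zip(client_sizes, client_params)) / total
            for i in range(dim)]

def make_shares(value, n_shares, rng):
    """Split a number into n additive shares that sum back to the value."""
    shares = [rng.uniform(-1e6, 1e6) for _ in range(n_shares - 1)]
    shares.append(value - sum(shares))
    return shares

clients = [[0.2, 1.0], [0.4, 0.0]]       # made-up local model parameters
sizes = [300, 100]                       # client 1 holds 3x more data
print(federated_average(clients, sizes))

# No single share reveals the secret, but the sum reconstructs it exactly.
shares = make_shares(0.25, 3, random.Random(0))
print(abs(sum(shares) - 0.25) < 1e-6)
```

In a real Secure Aggregation protocol each client sends one share to each peer, so the server only ever sees per-coordinate sums, never an individual client's update.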
Compared with the prior art, the invention has the advantages and positive effects that:
in the invention, advanced deep learning algorithms are applied to face detection, feature extraction and matching, so that the recognition accuracy and robustness of the system are improved. Introducing adversarial training and defense mechanisms enhances the system's adversarial defense capability, making the system less vulnerable to fraudulent samples and attacks. The multi-modal data fusion module combines information such as audio and gesture, improving the accuracy and robustness of the system. In terms of data privacy and security protection, enhanced encryption and permission control ensure the confidentiality and integrity of data. Performance and efficiency are further improved by optimizing each module of the system and adopting unsupervised learning methods. In conclusion, the improved face recognition scheme has obvious advantages in accuracy, resistance to attacks, multi-modal information fusion, data privacy protection and system performance, and significantly improves the reliability and practicability of the face recognition system.
Drawings
Fig. 1 is a schematic diagram of a main system framework of a face recognition system based on machine learning according to the present invention;
fig. 2 is a schematic diagram of a data acquisition module framework of a face recognition system based on machine learning according to the present invention;
FIG. 3 is a schematic diagram of a face detection and positioning module framework of a face recognition system based on machine learning according to the present invention;
fig. 4 is a schematic diagram of a feature extraction module framework of a face recognition system based on machine learning according to the present invention;
fig. 5 is a schematic diagram of a feature matching and recognition module framework of a face recognition system based on machine learning according to the present invention;
fig. 6 is a schematic diagram of a security and privacy protection module framework of a face recognition system based on machine learning according to the present invention;
FIG. 7 is a schematic diagram of a system management and optimization module framework of a face recognition system based on machine learning according to the present invention;
fig. 8 is a schematic diagram of a multi-modal data fusion module framework of a face recognition system based on machine learning according to the present invention;
fig. 9 is a schematic diagram of a model interpretability module framework of a face recognition system based on machine learning according to the present invention;
fig. 10 is a schematic diagram of a federal learning module framework of a machine learning-based face recognition system according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Furthermore, in the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Example 1
Referring to fig. 1, the present invention provides a technical solution: the face recognition system based on machine learning consists of a data acquisition module, a face detection and positioning module, a feature extraction module, a feature matching and recognition module, a security and privacy protection module, a system management and optimization module, a multi-mode data fusion module, a model interpretability module and a federal learning module;
The functional items of the data acquisition module comprise image input and image preprocessing;
the functional items of the face detection and positioning module comprise face detection and face alignment;
the functional items of the feature extraction module are specifically depth feature extraction;
the functional items of the feature matching and identifying module comprise feature comparison and identifying decisions;
the functional items of the security and privacy protection module comprise data encryption and storage and authority control;
the functional items of the system management and optimization module comprise unsupervised learning and adversarial training;
the function item of the multi-mode data fusion module is specifically multi-mode feature fusion;
the functional items of the model interpretability module are specifically interpretability analysis;
the function item of the federal learning module is specifically federal learning.
Through the image input and preprocessing functions of the data acquisition module, the system can receive high-quality images and perform preprocessing to improve the accuracy of the follow-up modules. The face detection and positioning module is responsible for accurately detecting and positioning the face region and providing accurate input for the subsequent feature extraction and matching module. The feature extraction module extracts high-dimensional features of the face by using a deep learning method, and enhances the discrimination capability and generalization capability. The feature matching and recognition module performs a face recognition task through feature comparison and recognition decision.
The security and privacy protection module adopts means such as data encryption, storage, authority control and the like to protect the security of private data. The system management and optimization module promotes system automation and robustness through unsupervised learning and countermeasure training. The multi-mode data fusion module fuses the data of different sensors and information sources, and improves the identification accuracy and robustness. The model interpretability module helps understand and interpret the model decision process, improving the understandability and credibility of the system. The federal learning module implements model updating and privacy protection by training and aggregating model parameters on distributed devices.
Overall, integrating these modules improves the effectiveness of the machine-learning-based face recognition system in terms of accuracy, robustness, privacy protection and interpretability. The complete system can be applied in various practical scenarios to provide a reliable and efficient face recognition solution.
Referring to fig. 2, image input uses an image acquisition device including a camera and a video camera to acquire face image data in real time;
the image preprocessing comprises image format conversion, denoising and brightness contrast adjustment;
the image format conversion is specifically to convert the collected image data into a format suitable for processing by a specific algorithm by using an image processing algorithm, wherein the format comprises an RGB format and a gray scale format;
the denoising specifically eliminates noise and artifacts in the image by adopting a deep-learning-based noise removal algorithm;
the brightness and contrast adjustment is specifically to perform self-adaptive adjustment according to brightness and contrast of the image by using a histogram equalization and contrast enhancement algorithm, so as to enhance the visibility of the image.
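The grayscale conversion and histogram-equalization steps described above can be sketched in a short numpy example. This is an illustrative sketch, not the patented implementation: the BT.601 luma weights and the global (rather than adaptive) equalization are assumptions made for brevity.

```python
import numpy as np

def rgb_to_gray(img):
    """Convert an HxWx3 RGB image to grayscale using the ITU-R BT.601 weights."""
    return (0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]).astype(np.uint8)

def equalize_histogram(gray):
    """Brightness/contrast adjustment via histogram equalization: remap pixel
    values through the normalized cumulative histogram to spread intensities."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```

In a production system these steps would typically be delegated to an image library such as OpenCV; the sketch only shows the underlying arithmetic.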
The image input utilizes devices such as cameras and video cameras to acquire face image data in real time, ensuring that the system can handle real-time scenes. The image preprocessing enhances system performance through format conversion, denoising and brightness/contrast adjustment.
The function of image preprocessing has several benefits for system implementation. Format conversion ensures that the image data is adapted to the processing requirements of a specific algorithm, improving overall performance. The denoising technology can effectively eliminate noise and artifacts in the image, improve the image quality and facilitate the improvement of the accuracy and stability of the subsequent modules. The brightness and contrast adjustment can enhance the visibility of the image, make details clearer, and help the face recognition system to sense and extract face features.
In summary, the image input and image preprocessing functions play an important role in machine learning-based face recognition systems. By acquiring the face image data in real time and preprocessing the face image data, the system can have the capabilities of instantaneity and improving the image quality and the visibility, so that a better foundation is provided for the work of a subsequent module. This helps to improve accuracy, robustness and performance of the system, providing a reliable solution for the face recognition field.
Referring to fig. 3, face detection employs Haar cascades combined with a convolutional neural network;
the Haar cascade describes various attributes of the face by utilizing a characteristic template, carries out sliding window detection on the image, and carries out classification judgment by utilizing a cascade classifier;
the convolutional neural network carries out face detection by training the neural network with multi-layer convolution and pooling operation;
the face alignment comprises key point positioning and geometric transformation;
the key point positioning is carried out on the face image by using a deep learning model, particularly a face key point detection network, wherein the key points comprise face characteristic points of eyes, nose and mouth;
the geometric transformation specifically applies geometric transformations, including affine and perspective transformations, based on the position information of the key points, so that face images are aligned and kept consistent in scale, angle and position, eliminating scale and angle differences across images and allowing consistent face features to be extracted. The face detection adopts Haar cascades combined with a convolutional neural network, so that the face region can be rapidly and accurately detected from the image, providing accurate input for subsequent feature extraction and recognition. The face alignment comprises key point positioning and geometric transformation: key point positioning uses a deep learning model to accurately locate key points in the face image, and the geometric transformation adjusts the image according to the positions of those key points so that face images are aligned and consistent in scale, angle and position.
The implementation of these modules brings about several advantages. Firstly, the face detection can efficiently detect the face region, and the real-time performance and the response capability of the system are improved. Secondly, through key point positioning, the system can accurately understand the face structure and the position, and provides a basis for subsequent alignment operation. Finally, the geometric transformation can realize the alignment and consistency of the face images, eliminate the difference of the scale and the angle and extract the consistent face characteristics.
In summary, the face detection and face alignment module plays an important role in the implementation of face recognition systems. The method improves the accuracy, the robustness and the universality of the system on different face images through rapid and accurate face detection, accurate key point positioning and geometric transformation. This helps to build a reliable and efficient face recognition solution, suitable for a variety of practical application scenarios.
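The alignment step above can be sketched as solving a similarity transform (rotation, scale and translation) that maps two detected eye keypoints onto canonical positions. The closed-form two-point solve below is an illustrative assumption; the patent does not prescribe this particular method.

```python
import numpy as np

def similarity_transform(src, dst):
    """Solve the 2x3 similarity matrix [a -b tx; b a ty] mapping two source
    keypoints (e.g. detected eye centers) onto two destination keypoints."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    # u = a*x - b*y + tx and v = b*x + a*y + ty give a 4x4 linear system.
    A = np.array([[x1, -y1, 1, 0],
                  [y1,  x1, 0, 1],
                  [x2, -y2, 1, 0],
                  [y2,  x2, 0, 1]], dtype=float)
    a, b, tx, ty = np.linalg.solve(A, np.array([u1, v1, u2, v2], dtype=float))
    return np.array([[a, -b, tx], [b, a, ty]])

def align_points(points, M):
    """Apply a 2x3 affine matrix to an (N, 2) array of keypoints."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return pts @ M.T
```

With more than two keypoints (eyes, nose, mouth corners), the same transform would normally be fitted by least squares instead of an exact solve.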
Referring to FIG. 4, depth feature extraction employs the ArcFace algorithm and the SphereFace algorithm;
the ArcFace algorithm generates discriminative face feature vectors by introducing an angular cosine loss function and network structure, and realizes face feature extraction by minimizing the distance between feature vectors of the same person and maximizing the distance between different persons;
The SphereFace algorithm performs spherical mapping on the feature vectors in the feature space, generating a robust face feature representation, and achieves face feature discrimination by minimizing the angular interval between feature vectors of the same person and maximizing the angular interval between different persons.
The ArcFace algorithm generates discriminative face feature vectors by introducing an angular cosine loss function and network structure, minimizing the distance between feature vectors of the same person and maximizing the distance between different persons to realize face feature extraction. The SphereFace algorithm performs spherical mapping on the feature vectors, generating a robust face feature representation, minimizing the angular interval between feature vectors of the same person and maximizing the angular interval between different persons to achieve face feature discrimination.
First, the ArcFace algorithm produces face feature vectors with high separability and robustness, improving the accuracy and reliability of face recognition. Second, the SphereFace algorithm generates robust face feature representations through spherical mapping, adapting well to faces under different angles and illumination conditions. Together, these algorithms also increase the degree of distinction between facial features, making the feature representations of different people more discernible.
In conclusion, the ArcFace and SphereFace algorithms have important practical effects in machine learning-based face recognition systems. By generating discriminative and robust face feature vectors, they significantly improve the accuracy, robustness and reliability of the recognition system, provide a powerful feature extraction method, and lay a solid foundation for subsequent face matching and recognition tasks.
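The additive angular margin idea behind ArcFace can be sketched numerically: the margin is added to the angle between a sample's embedding and its own class weight, pulling same-identity embeddings closer on the hypersphere. The numpy fragment below is an illustrative sketch of the logit computation only, not the patented implementation; the scale s=64 and margin m=0.5 are commonly used defaults assumed here.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """ArcFace additive angular margin: add margin m to the angle between each
    sample's embedding and its ground-truth class weight, then rescale by s.
    The result is fed to a softmax cross-entropy loss during training."""
    # L2-normalize embeddings and class weights so dot products are cosines.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)
    theta = np.arccos(cos)
    # Penalize only each sample's own class by widening its angle by m.
    rows = np.arange(len(labels))
    theta[rows, labels] += m
    return s * np.cos(theta)
```

Because the margin lowers the target-class logit, the network must shrink the true angle well below the margin to classify correctly, which is what yields the angular separation between identities.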
Referring to fig. 5, the feature comparison algorithm includes euclidean distance and cosine similarity;
the Euclidean distance is used for measuring the difference or similarity between the two face feature vectors by calculating the Euclidean distance between the two face feature vectors;
the cosine similarity is based on the included angle between the two face feature vectors, the similarity between the two face feature vectors is measured, and the value range of the cosine similarity is between-1 and 1;
the algorithm for identifying the decision adopts threshold setting and decision rules;
the recognition decision judges whether the two feature vectors are matched or similar by setting a threshold value;
the decision rule is set according to the feature comparison result and the threshold: a binary judgment is executed, and if the comparison result exceeds the threshold the feature vectors are judged to match, otherwise they do not.
Euclidean distance measures the difference or similarity between two face feature vectors by calculating the distance between the feature vectors, while cosine similarity measures the similarity between the feature vectors based on the included angle. The recognition decision algorithm determines the result of the feature comparison by setting a threshold value, thereby executing a recognition decision.
The Euclidean distance algorithm has the characteristics of high intuitiveness, simplicity and rapidness, and is suitable for a real-time identification system. The cosine similarity algorithm has the advantages of strong robustness and range determination, and can process feature vectors with different lengths and perform similarity determination according to a threshold value. The recognition decision algorithm has high flexibility and strong adjustability, and can adjust the threshold value according to actual requirements so as to balance the misjudgment rate and the missed recognition rate.
In summary, Euclidean distance and cosine similarity as feature comparison algorithms, together with the recognition decision algorithm, play an important role in the implementation of face recognition systems. By measuring the difference or similarity between feature vectors and making recognition decisions against a threshold, they improve the accuracy, robustness and flexibility of the system, providing an effective comparison and decision method for efficient, accurate face recognition in practical applications.
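The comparison metrics and the threshold-based decision rule described above can be expressed in a few lines. The threshold value 0.5 below is an illustrative assumption; in practice it is tuned to trade false accepts against false rejects.

```python
import numpy as np

def euclidean_distance(f1, f2):
    """Difference between two face feature vectors: L2 norm of their gap."""
    return float(np.linalg.norm(f1 - f2))

def cosine_similarity(f1, f2):
    """Similarity based on the angle between the vectors, in [-1, 1]."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def is_match(f1, f2, threshold=0.5):
    """Binary decision rule: declare a match when the cosine similarity
    exceeds the threshold; otherwise report no match."""
    return cosine_similarity(f1, f2) > threshold
```

Note that cosine similarity is scale-invariant, so it pairs naturally with the angle-based ArcFace and SphereFace features, whereas Euclidean distance also reacts to vector magnitude.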
Referring to fig. 6, the data encryption specifically adopts a symmetric encryption algorithm to perform AES encryption on the stored face image and data, so as to ensure confidentiality of the data in the storage and transmission processes;
the authority control comprises an identity authentication mechanism and an access authority strategy;
an authentication mechanism to ensure that only authorized users can access the system and related data by requiring the user to provide authentication information;
the access rights policy is set hierarchically based on user roles and permission levels.
The data encryption adopts a symmetric encryption algorithm, such as AES, to encrypt the stored face image and the data so as to ensure confidentiality of the data in the storage and transmission processes. By the aid of the method, data security can be guaranteed, unauthorized visitors are prevented from acquiring sensitive information, and regulation and privacy protection requirements are met.
The rights control includes an authentication mechanism and an access rights policy. The authentication mechanism requires the user to provide authentication information, such as a user name and password or biometric characteristics, to ensure that only authorized users have access to the system and related data. This may control access by unauthorized users and provide user tracking and auditing functions. The access rights policy sets hierarchical access rights based on user roles and rights levels, limiting the range of data that a user can access and operate. The sensitive data can be protected, the risk of data leakage is reduced, and flexibility and expandability are provided so as to meet the requirements of different users and scenes.
In summary, through implementation of data encryption and rights control, the face recognition system can ensure confidentiality and integrity of data, and control that only authorized users can access sensitive data. Thus, the privacy of the user can be protected, and the risks of data disclosure and unauthorized access are prevented. At the same time, the measures meet the requirements of regulations and provide compliance assurance. For system implementation, data encryption and authority control provide important practical effects, and ensure the safety, compliance and user privacy protection of the system.
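The hierarchical access-rights policy can be illustrated with a minimal role-to-level check: a resource is accessible only when the user's role level meets the resource's required level. The role names, resource names and numeric levels below are hypothetical examples, not part of the claimed system.

```python
# Hierarchical access-rights sketch. Higher numbers grant broader access;
# these role/resource levels are illustrative assumptions only.
ROLE_LEVELS = {"viewer": 1, "operator": 2, "admin": 3}
RESOURCE_LEVELS = {"match_result": 1, "face_image": 2, "raw_database": 3}

def can_access(role, resource):
    """Grant access when the role's level meets the resource's requirement.
    Unknown roles get level 0; unknown resources are never accessible."""
    return ROLE_LEVELS.get(role, 0) >= RESOURCE_LEVELS.get(resource, float("inf"))
```

In a real deployment this check would sit behind the identity authentication mechanism, and every grant or denial would be logged for the tracking and auditing function mentioned above.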
Referring to fig. 7, the unsupervised learning adopts a self-supervised learning method: a self-supervised task and learning target are set, positive and negative pairs of face images are generated through image rotation, cropping or color transformation, and the deep learning model learns key features of the face images through the learning target, improving the system's performance on unlabeled data;
adversarial training includes generating adversarial samples and defense mechanisms;
the adversarial sample generation produces misleading input samples through a generative adversarial network; by generating adversarial samples, the deep learning model learns to identify and resist attacks against the face recognition system, including face spoofing and image tampering;
The defense mechanism is specifically input-space random perturbation, which increases the difficulty of adversarial attacks by applying random perturbation or interference to the input image.
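The input-space random perturbation defense can be sketched as adding small bounded noise to each image before inference, which disrupts the precisely crafted pixel patterns that adversarial attacks rely on. The perturbation budget of 4 intensity levels below is an assumed value, not one specified by the patent.

```python
import numpy as np

def randomized_input(image, eps=4.0, rng=None):
    """Input-space random perturbation defense: add uniform noise within
    [-eps, eps] (in 8-bit pixel units) and clip back to the valid range,
    so an attacker cannot rely on exact pixel values surviving inference."""
    rng = np.random.default_rng(rng)
    noise = rng.uniform(-eps, eps, size=image.shape)
    return np.clip(image.astype(float) + noise, 0, 255).astype(np.uint8)
```

A fresh perturbation would be drawn for every inference call, so repeated probing by an attacker sees a slightly different effective input each time.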
Referring to fig. 8, the multi-modal feature fusion includes multi-modal fusion network, weighted average fusion, and decision level fusion;
the multi-mode fusion network is specifically a multi-mode fusion convolutional neural network, the multi-mode fusion convolutional neural network adopts a plurality of branches, each branch processes the characteristics of one mode, and the information of a plurality of modes is integrated through the fusion of network layers;
the weighted average fusion is to set weights for the features of different modes, and fusion is carried out in a weighted average mode;
after the multi-mode feature extraction is carried out in the decision-level fusion, the features of each mode are classified by using a plurality of single-mode classifiers, and the results of all the single-mode classifiers are synthesized through voting, weighted voting and integrated learning modes, so that a final fusion decision result is obtained.
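The weighted average fusion and the voting form of decision-level fusion described above can be sketched directly; the network-level fusion branch is omitted here since it requires a trained model. This is an illustrative sketch with assumed inputs, not the patented implementation.

```python
import numpy as np

def weighted_average_fusion(features, weights):
    """Feature-level fusion: combine per-modality feature vectors of equal
    length by a weighted average, normalizing the weights to sum to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * np.asarray(f, dtype=float) for wi, f in zip(w, features))

def majority_vote(predictions):
    """Decision-level fusion: combine the label outputs of several
    single-modality classifiers by majority vote."""
    values, counts = np.unique(np.asarray(predictions), return_counts=True)
    return values[np.argmax(counts)]
```

Weighted voting would replace the raw counts with per-classifier reliability weights; ensemble learning would train a meta-classifier on the single-modality outputs instead.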
The unsupervised learning adopts a self-supervised learning method: by setting a self-supervised task and learning target, positive and negative pairs of face images are generated through rotation, cropping or color transformation to improve the system's performance on unlabeled data. This lets the system learn key features from a large amount of unlabeled data, enhancing the accuracy and robustness of face recognition.
Adversarial training involves generating adversarial samples and defense mechanisms. Adversarial sample generation produces misleading input samples through a generative adversarial network, so the deep learning model can distinguish and resist attacks faced by a face recognition system, such as face spoofing and image tampering. Such training improves the system's ability to recognize and defend against potential attacks.
The defense mechanism increases the difficulty of adversarial attacks by applying random perturbations, such as interference or random transformations, to the image in the input space. This makes it harder for an attacker to construct a targeted attack sample and enhances the security of the system.
The benefits of unsupervised learning and adversarial training are manifold. Unsupervised learning exploits unlabeled data to improve system performance and strengthen feature learning. Adversarial training, through generated adversarial samples and defense mechanisms, enables the system to cope with attacks and increases robustness. In combination, these methods give the face recognition system higher performance and security in practical applications.
Referring to FIG. 9, the interpretability analysis uses gradient importance methods, class activation map techniques, and interpretable deep learning models;
the gradient importance method specifically uses techniques including Gradient × Input to explain the model's decision process and regions of interest for an input face image;
the class activation map technique specifically adopts the Grad-CAM method, generating an activation map by computing gradient weights on the input feature maps to display the regions the model attends to when recognizing a face, and to understand how strongly the model depends on important facial features and regions;
the interpreted deep learning model interprets the prediction results of the model by modeling the training process of the model and provides a contribution analysis to the features.
The interpretability analysis methods comprise the gradient importance method, class activation map techniques and interpretable deep learning models. The gradient importance method uses Gradient × Input to explain the model's decision process and regions of interest for the input face image, revealing the importance the model assigns to different features. The class activation map technique adopts the Grad-CAM method, generating an activation map from gradient weights computed on the input feature maps to display the regions the model attends to in face recognition and to make the model's predictions more interpretable. The interpretable deep learning model explains the model's prediction results by modeling its training process and provides a contribution analysis of the features. From an implementation point of view, these interpretability analysis methods benefit face recognition systems in several ways:
Decision interpretation: an explanation of the model decisions is provided to help understand how the model identifies and classifies facial images.
Region of interest positioning: by revealing the areas of interest to the model, the interpretation and understanding of specific areas and features by the face recognition system can be increased.
Visual interpretation: the gradient importance method and class activation map technique provide visual interpretation, making the working process of the face recognition system more transparent and interpretable.
Improved credibility and trust: the interpretability analysis enhances the credibility and trustworthiness of the face recognition system, so that users can better understand and accept the basis of the system's decisions.
In summary, implementation of the interpretability analysis in face recognition systems is crucial for the improvement of system performance and trust of users. Gradient importance methods, class activation graph techniques, and interpreted deep learning models provide interpretability and interpretability for face recognition systems, helping to understand the process of model decisions, regions of interest, and predictive results in depth. By enhancing the interpretability of the system, the face recognition system can better meet the actual demands and improve the user satisfaction and the reliability of the system.
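The Grad-CAM computation referenced above reduces to a small amount of arithmetic once the last convolutional layer's activations and the gradients of the target class score with respect to them are available. The sketch below assumes those two arrays are precomputed (normally by a deep learning framework's autograd) and only shows the map construction.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM sketch: weight each (C, H, W) feature-map channel by its
    spatially averaged gradient, sum the weighted maps, apply ReLU, and
    normalize to [0, 1] so the map can be overlaid on the face image."""
    weights = gradients.mean(axis=(1, 2))  # one importance weight per channel
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

The resulting low-resolution map is upsampled to the input image size for display, highlighting which facial regions drove the identity prediction.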
Referring to fig. 10, federated learning employs the Federated Averaging algorithm and Secure Aggregation techniques;
the Federated Averaging algorithm performs local model training on distributed devices and aggregates the model parameters to a central server to update a global model;
the Secure Aggregation technique uses secure multiparty computation: before the participants' model parameters are aggregated, they are encrypted and computed via secure protocols, including homomorphic encryption and secret sharing, so that the aggregated model parameters are obtained without exposing individual data.
The Federated Averaging algorithm aggregates model parameters on individual devices to a central server for global model updates by local model training on distributed devices. This approach brings the following benefits: first, it protects individual privacy because individual data does not need to be transmitted from the device to the central server; secondly, the distributed computation reduces the computation burden of the central server and improves the expansibility and efficiency of the system; finally, updating and fusing the global model improves the overall performance and accuracy of the model.
Secure Aggregation techniques use a secure multiparty computing approach to secure encryption of model parameters and computation via a secure protocol before aggregation of the model parameters by the participants, ensuring that individual data is not exposed. This technique brings the following beneficial effects: firstly, the data privacy is protected, and the individual data of the participants cannot be revealed; secondly, the safe computing protocol ensures the high efficiency and the credibility of aggregation; finally, secure Aggregation technology is suitable for federal learning setting with multi-party participation, and flexible and safe data merging is realized.
In summary, implementing federated learning with the Federated Averaging algorithm and Secure Aggregation techniques achieves important practical effects in protecting individual privacy and enabling collaborative learning. The approach protects individual privacy, improves system efficiency and performance, and supports trusted aggregation of model parameters. Applied to collaborative learning scenarios, federated learning provides a feasible way for multiple parties to jointly improve model performance.
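The server-side aggregation step of Federated Averaging can be sketched as a data-size-weighted mean of client parameters. This simplification treats each client model as a flat parameter vector and omits the encryption layer that Secure Aggregation would add around it.

```python
import numpy as np

def federated_averaging(client_params, client_sizes):
    """Federated Averaging aggregation: the server combines client model
    parameters as a weighted mean, with weights proportional to each
    client's local dataset size, producing the updated global model."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * np.asarray(p, dtype=float) for w, p in zip(weights, client_params))
```

In each communication round the server broadcasts the result back to the devices, which resume local training from the new global parameters.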
Working principle: in face recognition systems, data acquisition modules are an important component. First, face image data is acquired in real time from the real world using an image acquisition device such as a camera or a video camera. To ensure the quality and usability of subsequent processing, the image is preprocessed, including format conversion, denoising, brightness contrast adjustment, and the like.
The face detection and localization module follows. Using Haar cascades combined with a convolutional neural network, the system can accurately detect faces in the image. The detected face images are then aligned using key point positioning and geometric transformation so that they have consistent scale, angle and position.
The feature extraction module is responsible for extracting feature vectors from the face image. Using deep learning methods such as the ArcFace and SphereFace algorithms, the system extracts discriminative depth features.
The feature matching and identifying module is responsible for comparing feature vectors and measuring similarity. By adopting the algorithm such as Euclidean distance, cosine similarity and the like, the system can compare the similarity between different feature vectors. By setting a threshold value and executing a decision rule, the system can judge whether the feature vectors are matched or similar, and the face recognition function is realized.
To ensure security and privacy, the system further includes a data encryption and storage module. And (3) encrypting the stored face image and data by adopting a symmetrical encryption algorithm so as to ensure confidentiality in the storage and transmission processes. The rights control module includes an authentication mechanism and access rights policy, and only authorized users can access the system and related data.
In order to improve system performance and robustness, the system management and optimization module adopts unsupervised learning and adversarial training. Model training on unlabeled data through self-supervised learning improves system performance, while adversarial training counters attacks on the face recognition system, such as face spoofing and image tampering, by generating adversarial samples and defense mechanisms.
The multi-mode data fusion module fuses the characteristics of a plurality of modes so as to improve the accuracy and the robustness of face recognition. Features from different modes are fused through a multi-mode fusion network, a weighted average, a decision-level fusion and other methods.
The model interpretability module is used for interpreting the prediction result of the model and the face recognition process. By adopting a gradient importance method, a class activation map technology and an explanatory deep learning model, the system can explain the prediction result of the model and analyze the characteristic contribution degree.
Finally, the system also supports a federated learning module. Using the Federated Averaging algorithm and Secure Aggregation techniques, local model training is performed on distributed devices and model parameters are aggregated to a central server for global model updates while protecting individual privacy.
The present invention is not limited to the above embodiments, and any equivalent embodiments which can be changed or modified by the technical disclosure described above can be applied to other fields, but any simple modification, equivalent changes and modification made to the above embodiments according to the technical matter of the present invention will still fall within the scope of the technical disclosure.
Claims (10)
1. The face recognition system based on machine learning is characterized in that: the face recognition system based on machine learning consists of a data acquisition module, a face detection and positioning module, a feature extraction module, a feature matching and recognition module, a security and privacy protection module, a system management and optimization module, a multi-mode data fusion module, a model interpretability module and a federated learning module;
The functional items of the data acquisition module comprise image input and image preprocessing;
the functional items of the face detection and positioning module comprise face detection and face alignment;
the functional item of the feature extraction module is specifically depth feature extraction;
the functional items of the feature matching and identifying module comprise feature comparison and identifying decisions;
the functional items of the security and privacy protection module comprise data encryption and storage and authority control;
the functional items of the system management and optimization module comprise unsupervised learning and adversarial training;
the function item of the multi-mode data fusion module is specifically multi-mode feature fusion;
the functional items of the model interpretability module are specifically interpretability analysis;
the function item of the federated learning module is specifically federated learning.
2. The machine learning based face recognition system of claim 1, wherein: the image input uses image acquisition equipment comprising a camera and a video camera to acquire face image data in real time;
the image preprocessing comprises image format conversion, denoising and brightness contrast adjustment;
the image format conversion is specifically to convert the collected image data into a format suitable for processing by a specific algorithm by using an image processing algorithm, wherein the format comprises an RGB format and a gray scale format;
The denoising is specifically to eliminate noise and artifacts in an image by adopting a noise removal algorithm based on deep learning;
the brightness and contrast adjustment is specifically performed by using a histogram equalization and contrast enhancement algorithm, and the visibility of the image is enhanced by performing self-adaptive adjustment according to the brightness and contrast of the image.
3. The machine learning based face recognition system of claim 1, wherein: the face detection adopts Haar cascades combined with a convolutional neural network;
the Haar cascade describes various attributes of a human face by utilizing a characteristic template, carries out sliding window detection on an image, and carries out classification judgment by utilizing a cascade classifier;
the convolutional neural network performs face detection by training a neural network with multi-layer convolution and pooling operations;
the face alignment comprises key point positioning and geometric transformation;
the key point positioning is carried out on the face image by using a deep learning model, particularly a face key point detection network, wherein the key points comprise face feature points of eyes, noses and mouths;
the geometric transformation is specifically that based on the position information of the key points, geometric transformation including affine transformation and perspective transformation is carried out, so that face images are aligned and kept at consistent scales, angles and positions, scale and angle differences in different images are eliminated, and consistent face features are extracted.
4. The machine learning based face recognition system of claim 1, wherein: the depth feature extraction adopts an ArcFace algorithm and a SphereFace algorithm;
the ArcFace algorithm generates discriminative face feature vectors by introducing an angular cosine loss function and network structure, and realizes face feature extraction by minimizing the distance between feature vectors of the same person and maximizing the distance between different persons;
the SphereFace algorithm generates a robust face feature representation by performing spherical mapping on feature vectors in the feature space, and realizes face feature discrimination by minimizing the angular interval between feature vectors of the same person and maximizing the angular interval between different persons.
5. The machine learning based face recognition system of claim 1, wherein: the feature comparison algorithm comprises Euclidean distance and cosine similarity;
the Euclidean distance is used for measuring the difference or the similarity between the two face feature vectors by calculating the Euclidean distance between the two face feature vectors;
the cosine similarity is based on an included angle between two face feature vectors to measure the similarity between the two face feature vectors, and the value range of the cosine similarity is between-1 and 1;
The algorithm for identifying the decision adopts threshold setting and decision rules;
the recognition decision judges whether the two feature vectors are matched or similar by setting a threshold value;
the decision rule is set according to the feature comparison result and the threshold: a binary judgment is executed, and if the comparison result exceeds the threshold the feature vectors are judged to match, otherwise they do not.
6. The machine learning based face recognition system of claim 1, wherein: the data encryption specifically adopts a symmetric encryption algorithm, and AES encryption is carried out on the stored face image and data, so that confidentiality of the data in the storage and transmission processes is ensured;
the authority control comprises an identity authentication mechanism and an access authority strategy;
the authentication mechanism ensures that only authorized users can access the system and related data by requiring the users to provide authentication information;
the access right policy is set to be a hierarchical access right policy based on the user roles and the right level.
7. The machine learning based face recognition system of claim 1, wherein: the unsupervised learning adopts a self-supervised learning method, in which a self-supervised task and learning target are set, positive and negative pairs of face images are generated through image rotation, cropping or color transformation, and the deep learning model learns key features of the face images through the learning target, improving the system's performance on unlabeled data;
the adversarial training includes generating adversarial samples and defense mechanisms;
the generated countermeasure sample is an input sample with misleading property generated by a countermeasure generation network, and the deep learning model can be used for identifying and countering attacks aiming at a face recognition system by generating the countermeasure sample, wherein the attacks comprise face camouflage and image tampering;
the defense mechanism is specifically input space random disturbance, and the input space random disturbance increases the difficulty of attack resistance by carrying out random disturbance or interference on an input image.
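Two ingredients of this claim — augmentation-based positive/negative pair generation for self-supervision, and the input-space random perturbation defense — can be illustrated with a toy sketch (not part of the claims). Images are represented as nested lists of pixel values in [0, 1], and all function names are hypothetical:

```python
import random

def rotate_90(image):
    # 90-degree clockwise rotation: one simple augmentation for a positive pair.
    return [list(row) for row in zip(*image[::-1])]

def horizontal_flip(image):
    # Mirror each row: another cheap augmentation.
    return [row[::-1] for row in image]

def make_pairs(images):
    # Each image and an augmented view of it form a positive pair; views of
    # different images form negative pairs, as in contrastive self-supervision.
    positives = [(img, random.choice([rotate_90, horizontal_flip])(img))
                 for img in images]
    negatives = [(images[i], images[j])
                 for i in range(len(images))
                 for j in range(len(images)) if i != j]
    return positives, negatives

def randomized_input(image, epsilon=0.05, seed=None):
    # Input-space random perturbation defense: add small uniform noise to each
    # pixel and clip to [0, 1], partially disrupting crafted adversarial patterns.
    rng = random.Random(seed)
    return [[min(1.0, max(0.0, px + rng.uniform(-epsilon, epsilon))) for px in row]
            for row in image]
```

The perturbation magnitude `epsilon` trades off robustness against accuracy on clean inputs: larger noise disrupts more adversarial patterns but also distorts legitimate face images.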
8. The machine learning based face recognition system of claim 1, wherein: the multi-modal feature fusion comprises a multi-modal fusion network, weighted average fusion, and decision-level fusion;
the multi-modal fusion network is specifically a multi-modal fusion convolutional neural network, which adopts a plurality of branches, each branch processing the features of one modality, and integrates the information of the plurality of modalities through network-level fusion;
the weighted average fusion sets weights for the features of different modalities and fuses them by weighted averaging;
and in the decision-level fusion, after the multi-modal features are extracted, the features of each modality are classified by a plurality of single-modality classifiers, and the results of all single-modality classifiers are combined by voting, weighted voting, or ensemble learning to obtain the final fused decision result.
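The weighted average fusion and the voting variant of decision-level fusion admit a compact illustration (not part of the claims; function names are hypothetical):

```python
from collections import Counter

def weighted_average_fusion(features, weights):
    # features: one feature vector per modality (all of equal length);
    # weights: one weight per modality, typically summing to 1.
    dim = len(features[0])
    return [sum(w * f[i] for f, w in zip(features, weights)) for i in range(dim)]

def majority_vote(labels):
    # Decision-level fusion: each single-modality classifier votes a label,
    # and the most common label wins.
    return Counter(labels).most_common(1)[0][0]
```

For example, fusing a visible-light feature with weight 0.7 and an infrared feature with weight 0.3 produces a single vector in the shared feature space, while the vote combines per-modality class decisions instead of features.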
9. The machine learning based face recognition system of claim 1, wherein: the interpretability analysis adopts a gradient importance method, a class activation map technique, and an interpretable deep learning model;
the gradient importance method specifically uses a gradient importance method comprising Gradient x Input to explain, for an input face image, the decision process and the regions of interest of the model;
the class activation map technique specifically adopts the Grad-CAM class activation map method, which generates an activation map by computing gradient weights on the input feature map, so as to display the regions the model attends to when recognizing a face and to understand the model's degree of dependence on important facial features and regions;
and the interpretable deep learning model explains the prediction results of the model by modeling its training process and provides contribution analysis for the features.
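The Gradient x Input attribution named in this claim is simply an elementwise product of the model's input gradient with the input itself; a toy sketch (not part of the claims, with hypothetical function names and a pre-computed gradient standing in for backpropagation):

```python
def gradient_x_input(gradients, pixels):
    # Gradient x Input attribution: the elementwise product of the model's
    # gradient with respect to each input pixel and the pixel value itself
    # approximates each pixel's contribution to the prediction.
    return [g * x for g, x in zip(gradients, pixels)]

def top_k_regions(attributions, k=2):
    # Indices of the k most influential pixels, ranked by absolute attribution;
    # these indicate the regions of interest highlighted by the explanation.
    return sorted(range(len(attributions)),
                  key=lambda i: abs(attributions[i]), reverse=True)[:k]
```

In a real system the gradients would come from backpropagating the face-identity score through the network; here they are supplied directly to keep the example self-contained.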
10. The machine learning based face recognition system of claim 1, wherein: the federated learning adopts the Federated Averaging algorithm and the Secure Aggregation technique;
the Federated Averaging algorithm performs local model training on distributed devices and aggregates the model parameters to a central server to update the global model;
and the Secure Aggregation technique uses secure multi-party computation: before the participants aggregate the model parameters, the parameters are first securely encrypted, and the computation is carried out through secure protocols, including homomorphic encryption and secret sharing, so as to obtain the aggregated model parameters without exposing individual data.
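A minimal sketch of the two techniques in this claim (not part of the claims): Federated Averaging as a data-size-weighted mean of client parameter vectors, and secure aggregation reduced to its simplest additive-masking form, where pairwise masks hide individual updates but cancel in the sum. The function names and the dictionary-of-masks representation are hypothetical simplifications:

```python
def federated_average(client_params, client_sizes):
    # FedAvg: weight each client's parameter vector by its local data size,
    # as when the central server updates the global model.
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

def masked_sum(client_params, pairwise_masks):
    # Toy secure aggregation via secret-sharing-style masking: each pair of
    # clients (i, j) agrees on a random mask that client i adds and client j
    # subtracts, so individual updates stay hidden while the masks cancel
    # exactly in the aggregate.
    masked = [list(p) for p in client_params]
    for (i, j), mask in pairwise_masks.items():
        for d, m in enumerate(mask):
            masked[i][d] += m
            masked[j][d] -= m
    dim = len(client_params[0])
    return [sum(p[d] for p in masked) for d in range(dim)]
```

Because the masks cancel, the server obtains the same sum it would see in the clear, yet each individual masked vector reveals nothing about its owner's true parameters.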
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310942847.0A CN117115881A (en) | 2023-07-28 | 2023-07-28 | Face recognition system based on machine learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117115881A true CN117115881A (en) | 2023-11-24 |
Family
ID=88799287
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310942847.0A Pending CN117115881A (en) | 2023-07-28 | 2023-07-28 | Face recognition system based on machine learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117115881A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117592555A (en) * | 2023-11-28 | 2024-02-23 | 中国医学科学院北京协和医院 | Federal learning method and system for multi-source heterogeneous medical data |
CN117592555B (en) * | 2023-11-28 | 2024-05-10 | 中国医学科学院北京协和医院 | Federal learning method and system for multi-source heterogeneous medical data |
CN117911851A (en) * | 2024-03-19 | 2024-04-19 | 山东省物化探勘查院 | Underwater mapping real-time analysis method and system based on AI |
CN117911851B (en) * | 2024-03-19 | 2024-07-09 | 山东省物化探勘查院 | Underwater mapping real-time analysis method and system based on AI |
CN117992941A (en) * | 2024-04-02 | 2024-05-07 | 广东创能科技股份有限公司 | Method for monitoring login state of self-service terminal and actively protecting security |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||