CN112766422B - Privacy protection method based on lightweight face recognition model - Google Patents

Info

Publication number
CN112766422B
Authority
CN
China
Prior art keywords: network, training, model, data, lightweight
Prior art date
Legal status: Active
Application number
CN202110275875.2A
Other languages
Chinese (zh)
Other versions
CN112766422A (en)
Inventor
刘琚
赵雪圻
鲁昱
刘晓玺
张�杰
张昱
韩艳阳
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN202110275875.2A
Publication of CN112766422A
Application granted
Publication of CN112766422B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155: Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218: Protecting access to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks


Abstract

To address the risk that training data are leaked when face recognition is deployed in practice, the invention provides a privacy protection method based on a lightweight face recognition model. First, a partial linear mapping replaces part of the conventional convolution to reduce computation. Second, to counter attacks on the network model parameters, the method uses a voting mechanism plus added noise to randomize the released outputs and thereby protect the parameters. Finally, the invention introduces a "teacher-student" fusion architecture on top of the face recognition network to isolate sensitive from non-sensitive data. Without reducing the recognition rate, the method speeds up network operation, reduces the number of training parameters, and secures both the training and inference stages; benchmarks on speed and accuracy demonstrate its practical value.

Description

Privacy protection method based on lightweight face recognition model
Technical Field
The invention relates to a privacy protection method based on a lightweight face recognition model, and belongs to the technical field of image processing and privacy security protection.
Background Art
The rapid development of science and technology has profoundly changed today's society. Products such as smartphones and cloud processors, which satisfy society's material and intellectual needs, rely on large amounts of data for analysis and processing, yet the enterprises that use these data often fail to protect and regulate personal user information properly. Data protection is therefore an urgent problem for technologies such as artificial intelligence and deep learning. Privacy protection in deep learning usually modifies the model itself or the training mechanism, including the design of the network structure and the adjustment of the training method.
Early privacy protection relied on direct methods such as encryption and masking: sensitive information was shielded, or third-party access was blocked through encryption and decryption; for example, RSA can encrypt data with a fixed-length public key. As the demand for large volumes of image and video data grew, protecting such data with simple encryption and decryption became too complex and hard to combine with deep learning algorithms. Many new algorithms emerged, one of which is differential privacy. It was first proposed to defend against differential attacks, in which an attacker uses a limited number of queries to infer undisclosed parts of a data set; differential privacy applies a randomizing algorithm to the data set so that released results no longer expose the training data. With later improvements, the technique has been extended to other privacy protection problems.
In existing visual privacy protection, a common task is protecting and detecting faces, because a face uniquely identifies an individual within visual data and face recognition has the widest range of applications. Deep-learning face recognition is the focus of practical deployment; its main scenarios are home surveillance security and visual data protection in intelligent application systems. Such data concern the privacy of every person and family, so the privacy requirements are high. Major companies are investing in smart-home products, whose wide adoption must be built on data analysis, so face recognition research for these scenarios has great practical value.
Disclosure of Invention
Traditional face recognition networks suffer from large parameter counts and are difficult to port and to deploy widely, and training on a private data set carries a risk of data leakage. The invention provides a privacy protection method based on a face recognition network; the designed network model has few parameters, converges easily, and runs fast, which improves its portability and makes large-scale use convenient, while a differential privacy training mechanism effectively protects the training data.
The technical scheme adopted by the invention is as follows:
A privacy protection method based on a lightweight face recognition model uses a partial linear mapping to reduce computational complexity and a multilayer cascade of lightweight networks to extract features from a face data set. A "teacher-student" fusion framework is added: several teacher models are trained, Laplacian noise is added, and a voting mechanism produces the labels passed to the student model, randomizing the released data. The newly labeled non-private data set is then used to train the student model, and the obtained student model serves as the final network model. The method specifically comprises the following steps:
step (1): preparing a sensitive data set LFW for training a network, dividing a training set picture and a verification set picture into n parts, and regarding n-1 parts of data as sensitive data and regarding the other part as non-sensitive data;
step (2): generating n-1 improved lightweight networks and putting the training set of each of the n-1 sensitive-data parts into its own network for training; during training the lightweight network is divided into four sub-modules that extract and fuse features: each sub-module splits the original features into two parts, one part generating key feature maps by convolution and the other applying a simple linear mapping to those generated maps to produce auxiliary feature maps, and the two parts are fused into the final features; the output of the whole network is used to compute the loss function and back-propagate, finally yielding n-1 trained teacher models;
step (3): verifying with the n-1 divided verification sets, and checking whether the accuracy on each teacher model's verification set meets the recognition requirement;
step (4): using the remaining non-sensitive data as input to all n-1 teacher models; each sample is put into the teacher models in turn to obtain predictions, the class with the most votes is found through a softmax function, Laplace noise is added, and the class with the maximum noisy count is taken as the sample's data label;
step (5): using the obtained non-sensitive data and labels as a new training data set, regenerating the lightweight network of step (2), retraining with the non-sensitive data as the training set, extracting and fusing features, and finally obtaining the student model;
step (6): adding a face detection model to obtain face coordinates, and putting the student model into practical use.
Specifically, the proportion of the convolution operation part to the linear transformation part of the lightweight network in step (2) is:
$$\frac{m}{m(s-1)}=\frac{1}{s-1},\qquad n=ms$$
wherein m is the number of primary channels produced by the convolution part of the lightweight network, s is the fold ratio, and n = ms is the number of output channels; selecting the fold ratio sets the share of linear combinations and thereby controls the model's computational complexity.
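As a rough sketch of this proportion, the multiply-add saving from replacing a full n-channel convolution with m convolutional maps plus m(s-1) cheap linear maps can be estimated as follows; the kernel sizes and feature-map dimensions are illustrative GhostNet-style assumptions, not values fixed by the patent.

```python
# Sketch of the cost ratio for a lightweight module that produces n = m*s
# output channels from m convolutional "primary" maps plus m*(s-1) cheap
# linear maps. Kernel sizes k (conv) and d (linear map) and the spatial
# size h x w are illustrative assumptions.

def cost_ratio(c_in, m, s, k=3, d=3, h=56, w=56):
    """Ratio of a plain convolution's multiply-adds to the split module's."""
    n = m * s                                   # total output channels
    plain = n * h * w * c_in * k * k            # ordinary n-channel convolution
    primary = m * h * w * c_in * k * k          # conv part: m key feature maps
    cheap = m * (s - 1) * h * w * d * d         # linear part: m*(s-1) aux maps
    return plain / (primary + cheap)

# With fold ratio s the saving approaches s as the input channel count grows:
print(round(cost_ratio(c_in=64, m=16, s=2), 2))
```

The ratio simplifies to s·c_in / (c_in + s - 1) when k = d, so the speedup tends to the fold ratio s for wide inputs.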
Specifically, in step (4), the voting formula used is as follows:
$$n_j(x)=\bigl|\{\,i : i\in[n],\ f_i(x)=j\,\}\bigr|$$
where $n_j(x)$ is the number of teacher models that vote input $x$ into class $j$; the complete formula gives the class receiving the most teacher votes, after which Laplacian noise is added.
The final complete voting mechanism is as follows:
$$f(x)=\arg\max_j\bigl\{\,n_j(x)+\mathrm{Lap}(\gamma)\,\bigr\}$$
where $n_j(x)+\mathrm{Lap}(\gamma)$ is the vote count for class $j$ after adding Laplacian noise with parameter $\gamma$, and the complete formula returns the class $j$ whose noisy count is largest.
Compared with other privacy protection technologies, this approach differs in two ways and has the following advantages. First, the n-1 teacher models are trained on mutually exclusive partitions of the data; although this lowers data utilization, each model sees less data, which effectively reduces the exposure risk. Second, label prediction by teacher voting is in effect semi-supervised training: the protection is realized not during teacher training but in the fusion stage that forms the non-sensitive data set, where the voting mechanism isolates the real data from an attacker, so the two remain independent and privacy attacks cannot reach the training data.
In conclusion, the method reduces computational complexity, protects the training data, increases the network's inference speed, reduces the model's parameter count, and ensures the model's security.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a diagram of a lightweight network architecture of the present invention;
FIG. 3 is a training diagram of the present invention.
Detailed Description
The invention provides a privacy protection method based on a lightweight face recognition model. On top of a traditional face recognition model, a lightweight network module replaces part of the conventional convolution with a linear mapping; this reduces computational complexity while preserving feature fusion, cutting network parameters and increasing inference-stage speed, which makes the model more practical. Next, a "teacher-student" fusion framework is applied in the face recognition algorithm, and labels for non-sensitive data are generated by a voting mechanism with Laplacian noise. Finally, the generated new data set is used as the student model's training set, severing the association between the student's training set and the original sensitive data. FIG. 1 shows the flow of the method; the specific implementation steps are as follows:
(1) Dividing the sensitive data set into n parts;
(2) Respectively generating n neural networks as teacher models, each performing convolution and linear transformation through the lightweight network to extract deep features without increasing algorithmic complexity; the specific process is as follows:
as shown in fig. 2, for each lightweight network module, firstly, convolution operation is used to perform feature extraction on the original input channel number, and convert the original input channel number into features of m channel numbers, then linear transformation with low algorithm complexity is performed on the features of m channel numbers to generate m (s-1) auxiliary feature maps, and then the features of the two parts are superimposed to obtain n feature results, namely ms.
As can be seen from FIG. 2, the conventional feature extraction structure is an ordinary convolution with high computational complexity, and the feature maps it generates are highly redundant and repetitive. The lightweight module keeps the conventional convolution only for the more essential main features and derives the rest by linear transformation, reducing algorithmic complexity through this substitution. A network model is then built from these lightweight modules to realize the final structure. The model is divided into four parts, each using a different number of lightweight modules to extract features, followed by pooling and activation functions to further reduce redundancy and fit the data distribution.
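The split module described above can be illustrated with a minimal numpy sketch; the 1x1 convolution and the per-channel affine maps below are hypothetical stand-ins for the patent's unspecified convolution and linear transformation.

```python
import numpy as np

# Hedged sketch of the split feature module: a "primary" part produced by
# convolution and a "cheap" part produced by per-channel linear maps,
# concatenated to n = m*s channels.

def primary_conv(x, w):
    """1x1 convolution: x is (C_in, H, W), w is (m, C_in)."""
    return np.einsum('mc,chw->mhw', w, x)

def cheap_linear(feats, scales, biases):
    """Per-channel affine maps producing m*(s-1) auxiliary feature maps."""
    per_channel = scales.shape[0] // feats.shape[0]   # = s - 1
    aux = []
    for i in range(feats.shape[0]):
        for j in range(per_channel):
            k = i * per_channel + j
            aux.append(scales[k] * feats[i] + biases[k])
    return np.stack(aux)

def lightweight_module(x, w, scales, biases):
    prim = primary_conv(x, w)                    # m key feature maps
    aux = cheap_linear(prim, scales, biases)     # m*(s-1) auxiliary maps
    return np.concatenate([prim, aux], axis=0)   # n = m*s channels

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16, 16))                 # C_in = 8
m, s = 4, 3
w = rng.normal(size=(m, 8))
scales = rng.normal(size=(m * (s - 1),))
biases = rng.normal(size=(m * (s - 1),))
out = lightweight_module(x, w, scales, biases)
print(out.shape)
```

Only the m primary maps require a full convolution; the remaining channels cost one multiply and one add per pixel each.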
(3) ArcFace is used as the loss function at the output stage. It is a loss commonly used for face recognition, derived from the softmax loss, whose expression is:
$$L_{\mathrm{softmax}}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j=1}^{C}e^{W_{j}^{T}x_i+b_j}}$$
First, the inner product in the formula is written in terms of moduli:
$$W_j^{T}x=\lVert W_j\rVert\,\lVert x\rVert\cos\theta_j$$
Then the weights and the inputs are each L2-regularized and multiplied by a scaling coefficient s, giving:
$$L=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos\theta_{y_i}}}{e^{s\cos\theta_{y_i}}+\sum_{j\neq y_i}e^{s\cos\theta_j}}$$
For ordinary binary classification, one generally desires
$$W_1^{T}x>W_2^{T}x,$$
so the inner-product form gives $\lVert W_1\rVert\lVert x\rVert\cos\theta_1>\lVert W_2\rVert\lVert x\rVert\cos\theta_2$. To constrain the function further, a constant m is introduced as a strict constraint, yielding $\lVert W_1\rVert\lVert x\rVert\cos(\theta_1+m)>\lVert W_2\rVert\lVert x\rVert\cos(\theta_2+m)$. Finally, transforming the function gives the ArcFace expression:
$$L_{\mathrm{arcface}}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}$$
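The derivation above can be checked numerically with a minimal numpy sketch of the ArcFace loss; the scale s = 64 and margin m = 0.5 are common defaults from the ArcFace literature, assumed here rather than specified by the patent.

```python
import numpy as np

# Minimal sketch of the ArcFace loss derived above: L2-normalize the class
# weights and the embedding, add an angular margin m to the target class's
# angle, rescale by s, then apply softmax cross-entropy.

def arcface_loss(emb, W, label, s=64.0, m=0.5):
    emb = emb / np.linalg.norm(emb)                    # L2-normalize input
    W = W / np.linalg.norm(W, axis=0, keepdims=True)   # L2-normalize each class weight
    cos = W.T @ emb                                    # cos(theta_j) for all classes
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    logits = s * cos.copy()
    logits[label] = s * np.cos(theta[label] + m)       # margin on the target class only
    logits -= logits.max()                             # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label])

rng = np.random.default_rng(1)
emb = rng.normal(size=16)
W = rng.normal(size=(16, 10))                          # 10 hypothetical identities
plain = arcface_loss(emb, W, label=3, m=0.0)           # m = 0 reduces to scaled softmax
margin = arcface_loss(emb, W, label=3, m=0.5)
print(float(margin) > float(plain))
```

With m = 0 the function reduces to the scaled-softmax form above; the margin strictly shrinks cos(θ) for the target class, making the loss harder and the learned features more separated.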
(4) Performing backward propagation by applying the loss function to the output results, and iteratively training the networks over multiple epochs to generate the final teacher models;
(5) The network model is trained with a differential-privacy-based algorithm that labels the samples of the non-sensitive data set. Label prediction by teacher voting is in effect semi-supervised training: the protection is realized not during teacher training but in the fusion stage that forms the non-sensitive data set, where the voting mechanism isolates the real data from an attacker so that the two remain independent and privacy attacks cannot reach the training data. The specific voting rule is:
$$n_j(x)=\bigl|\{\,i : i\in[n],\ f_i(x)=j\,\}\bigr|$$
where $n_j(x)$ is the number of teachers that classify input $x$ into class $j$, $n$ is the total number of teacher models, and $f_i(x)$ is the prediction of the $i$-th teacher. Laplace noise is then added, so the final complete voting mechanism is:
$$f(x)=\arg\max_j\bigl\{\,n_j(x)+\mathrm{Lap}(\gamma)\,\bigr\}$$
$f(x)$ is the class of the label with the largest noisy vote count; adding Laplacian (or Gaussian) noise introduces interference and thereby improves safety.
(6) Training by using the obtained complete non-sensitive data set, and iterating for multiple times to obtain a final student model;
(7) When the system is put into practical home-scenario use, note that both the teacher and student models were trained on detected and cropped faces, whereas the input to the deployed network is a complete natural image, so faces must be detected before the student model is applied. Detection yields accurate face coordinates, after which recognition can proceed. The face detection part of the method is designed in four stages: image resizing, face candidate box generation, candidate box filtering and selection, and final boundary determination.
First, in the resizing stage, the network shrinks and crops the image into different detection reference scales according to factors such as the minimum face size, the minimum detection size, and the magnification factor, so that faces of different sizes can be detected reasonably.
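The resizing stage described above can be sketched as a pyramid of scale factors; the 20-pixel minimum face, 12-pixel detector window, and 0.709 shrink factor are common defaults for this style of detector, assumed here rather than taken from the patent.

```python
# Illustrative sketch of the image-resizing stage: build the pyramid of
# scale factors from a minimum detectable face size and a shrink factor.

def pyramid_scales(img_min_side, min_face=20, det_size=12, factor=0.709):
    scales = []
    scale = det_size / min_face        # map the smallest face onto the detector window
    side = img_min_side * scale
    while side >= det_size:            # stop once the image is smaller than one window
        scales.append(scale)
        scale *= factor
        side *= factor
    return scales

scales = pyramid_scales(img_min_side=480)
print(len(scales), round(scales[0], 3))
```

Each scale produces one resized copy of the image for the candidate generation network, so faces from min_face pixels up to the full image height all pass through the detector at roughly its native window size.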
Second, the resized images are fed into the candidate box generation network. After several convolution layers extract features, the network produces a face classification branch, a facial landmark localization branch, and a bounding box regression branch; its output serves as the preliminary selection of face detection boxes.
Next, the output of the candidate generation network passes through a refinement network that finely screens the candidate boxes; after poorly scored prediction boxes are filtered out, boundary regression and deduplication are applied to the remaining candidates, so the surviving boxes are more accurate and retain more features.
Finally, a bounding box determination network, with a more complex structure and stronger supervision, regresses the facial feature points more precisely. While retaining more features, it determines the detection box coordinates and the facial landmark positions.
Thus three networks with different functions separately perform boundary regression and landmark localization, yielding accurate face detection coordinates.
(8) After the face coordinates are obtained by detection, face recognition is performed with the student model's lightweight network, so that data privacy is guaranteed while accuracy is unchanged.
The training data set used in this application is CASIA-WebFace, with 10,575 subjects and 494,414 pictures, currently a large industry face recognition data set. For the LFW test set, the data provider supplies matched face pairs with labels, so accuracy can be tested directly. In the experiments, the training data set is divided into n = 4, 6, 8, 10 parts. One part is selected as non-sensitive data, i.e., used as the test set; its labels are obtained through the teacher models' voting mechanism and used for the student model's learning. The LFW test set is divided correspondingly. For the privacy noise parameter ε, the values used include 0.6, 2.04, 5.04, and 8.03. The privacy cost measures how strongly the added noise perturbs training on the original data: the smaller ε is chosen, the stronger the protection of the data and the worse its usability. The parameter δ is set to 10^-5. Table 1 compares face recognition model algorithms on parameter count and speed: both are greatly reduced while the recognition rate is almost unchanged. Table 2 analyzes the recognition rate for different numbers of teacher models and noise parameter sizes: once the amount of data reaches a certain level, the influence of the noise gradually decreases as the number of teacher models increases.
TABLE 1 (rendered as an image in the original; compares face recognition models by parameter count and speed)
TABLE 2 (rendered as an image in the original; recognition rate for different numbers of teacher models and noise parameters)

Claims (3)

1. A privacy protection method based on a lightweight face recognition model, characterized in that a partial linear mapping is used to reduce computational complexity and a multilayer cascade of lightweight networks extracts features from a face data set; a "teacher-student" fusion framework is added, in which several teacher models are trained, Laplace noise is added, and a voting mechanism produces the labels given to the student model, randomizing the released data; the newly obtained labeled non-private data set is put into the student model for training, and the obtained student model serves as the final network model; the method specifically comprises the following steps:
step (1): preparing a sensitive data set LFW for training a network, dividing a training set picture and a verification set picture into n parts, and regarding n-1 parts of data as sensitive data and the other part as non-sensitive data;
step (2): generating n-1 improved lightweight networks and putting the training set of each of the n-1 sensitive-data parts into its own network for training; during training the lightweight network is divided into four sub-modules that extract and fuse features: each sub-module splits the original features into two parts, one part generating key feature maps by convolution and the other applying a simple linear mapping to those generated maps to produce auxiliary feature maps, and the two parts are fused into the final features; the output of the whole network is used to compute the loss function and back-propagate, finally yielding n-1 trained teacher models;
step (3): verifying with the n-1 divided verification sets, and observing whether the accuracy on each teacher model's verification set meets the recognition requirement;
step (4): preparing the remaining non-sensitive data as input to all n-1 teacher models, putting each sample into the teacher models in turn to obtain prediction results, finding the class with the largest number of votes through a softmax function, adding Laplace noise, and taking the class with the maximum noisy count as the sample's data label;
step (5): preparing the obtained non-sensitive data and labels as a new training data set, regenerating the lightweight network of step (2), retraining with the non-sensitive data as the network's training set, extracting and fusing features, and finally obtaining the student model;
step (6): adding a face detection model to acquire face coordinates, and then performing face recognition with the student model.
2. The privacy protection method based on the lightweight face recognition model according to claim 1, characterized in that: the proportion of the convolution operation part to the linear transformation part of the lightweight network in step (2) is:
$$\frac{m}{m(s-1)}=\frac{1}{s-1},\qquad n=ms$$
wherein m is the number of primary channels produced by the convolution part of the lightweight network, s is the fold ratio, and n = ms is the number of output channels; selecting the fold ratio sets the share of linear combinations and thereby controls the model's computational complexity.
3. The privacy protection method based on the lightweight face recognition model as claimed in claim 1, characterized in that: in step (4), the voting formula used is as follows:
$$n_j(x)=\bigl|\{\,i : i\in[n],\ f_i(x)=j\,\}\bigr|$$
where $n_j(x)$ is the number of teacher models that vote input $x$ into class $j$; the complete formula gives the class receiving the most teacher votes, after which Laplacian noise is added;
the final complete voting mechanism is:
$$f(x)=\arg\max_j\bigl\{\,n_j(x)+\mathrm{Lap}(\gamma)\,\bigr\}$$
where $n_j(x)+\mathrm{Lap}(\gamma)$ is the vote count for class $j$ after adding Laplacian noise with parameter $\gamma$, and the complete formula returns the class $j$ whose noisy count is largest.
CN202110275875.2A 2021-03-15 2021-03-15 Privacy protection method based on lightweight face recognition model Active CN112766422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110275875.2A CN112766422B (en) 2021-03-15 2021-03-15 Privacy protection method based on lightweight face recognition model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110275875.2A CN112766422B (en) 2021-03-15 2021-03-15 Privacy protection method based on lightweight face recognition model

Publications (2)

Publication Number | Publication Date
CN112766422A (en) | 2021-05-07
CN112766422B (en) | 2022-11-15

Family

ID=75691234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110275875.2A Active CN112766422B (en) 2021-03-15 2021-03-15 Privacy protection method based on lightweight face recognition model

Country Status (1)

Country Link
CN (1) CN112766422B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723238B (en) * 2021-08-18 2024-02-09 厦门瑞为信息技术有限公司 Face lightweight network model construction method and face recognition method
CN113642717B (en) * 2021-08-31 2024-04-02 西安理工大学 Convolutional neural network training method based on differential privacy
CN114220137A (en) * 2021-11-08 2022-03-22 南京理工大学 Privacy protection face recognition method based on MindSpore
CN115082800B (en) * 2022-07-21 2022-11-15 阿里巴巴达摩院(杭州)科技有限公司 Image segmentation method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598603A (en) * 2019-09-02 2019-12-20 深圳力维智联技术有限公司 Face recognition model acquisition method, device, equipment and medium
CN112016674A (en) * 2020-07-29 2020-12-01 魔门塔(苏州)科技有限公司 Knowledge distillation-based convolutional neural network quantification method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543606B (en) * 2018-11-22 2022-09-27 中山大学 Human face recognition method with attention mechanism
US11755743B2 (en) * 2019-09-03 2023-09-12 Microsoft Technology Licensing, Llc Protecting machine learning models from privacy attacks
CN112199717B (en) * 2020-09-30 2024-03-22 中国科学院信息工程研究所 Privacy model training method and device based on small amount of public data
CN112329052B (en) * 2020-10-26 2024-08-06 哈尔滨工业大学(深圳) Model privacy protection method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598603A (en) * 2019-09-02 2019-12-20 深圳力维智联技术有限公司 Face recognition model acquisition method, device, equipment and medium
CN112016674A (en) * 2020-07-29 2020-12-01 魔门塔(苏州)科技有限公司 Knowledge distillation-based convolutional neural network quantification method

Also Published As

Publication number Publication date
CN112766422A (en) 2021-05-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant