CN115168633A - Face recognition privacy protection method capable of realizing strong scrambling - Google Patents


Info

Publication number
CN115168633A
CN115168633A
Authority
CN
China
Prior art keywords
face
privacy protection
face image
module
privacy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210858828.5A
Other languages
Chinese (zh)
Inventor
吴震东
黄炎华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202210858828.5A priority Critical patent/CN115168633A/en
Publication of CN115168633A publication Critical patent/CN115168633A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/535Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a face recognition privacy protection method capable of realizing strong scrambling. Two deep neural networks E1 and G1 are trained jointly end to end and then separated to obtain a privacy-protecting face generation module E and a privacy-protecting face recognition module G, where E1 is a new network obtained from the deep neural network UNet by adding a convolution layer and modifying the output layer, and G1 is a ResNet residual network. Module E directly modifies the face data to generate a highly blurred face image that human vision cannot identify; module G recognizes face images that have undergone this privacy protection processing. The invention improves the recognition rate on privacy-protected faces, and during recognition the privacy-protected face images in the face database are compared directly with the transmitted face image, with no decryption step, thereby realizing face privacy protection. The face image cannot be restored from the protected version, effectively securing the face database at the server side.

Description

Face recognition privacy protection method capable of realizing strong scrambling
Technical Field
The invention belongs to the technical field of face privacy protection and face recognition, and relates to a face recognition privacy protection method capable of realizing strong scrambling.
Background
In the current big-data era, face recognition technology has matured rapidly and is used in many areas of daily life, but the wide deployment of face recognition systems also raises a series of problems. Privacy protection of face data is now widely discussed: face recognition data is highly sensitive and directly tied to personal and property security. Yet to advance face recognition technology quickly, face databases are collected on a large scale, which brings security risks such as over-collection, improper use of databases, and database theft, all of which harm individual privacy. In this environment, face privacy protection is essential.
In a face recognition system, a face image captured by acquisition equipment must be transmitted to the server side and compared against the server's face database for recognition. Some prior attempts protect face privacy in this process by encryption and decryption: the server-side face database is encrypted, and the recognizable face image transmitted to the server is encrypted as well. During the actual recognition process, however, both the transmitted face and the faces in the database must be decrypted before matching, so the server-side face database still risks privacy disclosure.
Most existing face privacy protection deletes, via a deep neural network, the face image or soft biometric information in its feature vector, such as gender, age, and identity. Although such methods protect face image privacy to some degree, the degree of protection is limited, and they play only an auxiliary role in the field of face recognition privacy protection.
The invention provides a face recognition privacy protection method capable of realizing strong scrambling, which combines face privacy protection and face recognition more effectively. The face image is processed by a deep neural network so that it cannot be recognized by human vision, while a specially designed neural network model can still recognize the corresponding face normally, solving the problem of face privacy disclosure.
Disclosure of Invention
The invention aims to provide a face recognition privacy protection method capable of realizing strong scrambling.
The method comprises a privacy-protecting face generation module E and a privacy-protecting face recognition module G, where both E and G are deep neural networks.
The modules E and G are two networks obtained by separation after the two deep neural networks E1 and G1 are combined and trained end to end. E1 is a new network obtained from the deep neural network UNet by adding a convolution layer and modifying the output layer. G1 is a ResNet residual network.
The module E is used for directly modifying the face data to generate a highly blurred face image that human vision cannot identify.
The module G is used for recognizing face images that have undergone face privacy protection processing.
The method specifically comprises the following steps:
firstly, selecting face image data from a face image database, and carrying out face positioning preprocessing operation on the face image data.
And secondly, pre-training the face image data processed in the first step in G1.
And thirdly, sending the face database into the combined E1+G1 network for end-to-end training, and separating the network after training is finished. The specific operation is: the last convolution layer of E1 is cut off and the deep neural network of the remaining layers is kept as the privacy-protecting face generation module E, while that last convolution layer of E1 is prepended to G1 to form the privacy-protecting face recognition module G.
And fourthly, sending the face image into the privacy protection face generation module E obtained in the third step, outputting the face image after privacy protection and storing the face image in a server side to serve as a privacy protection face template database. In the identification process, the privacy protection face image is compared with a privacy protection face template database through a privacy protection identification module G to carry out face identification.
In the third step, E1+G1 performs end-to-end training with the loss function
Figure BDA0003755471190000021
In the back propagation of this loss function, only the parameters in G1 are modified; E1 adjusts only the parameters of its BN layers (BatchNorm, a common structure in the field of deep neural network learning) through the statistics it accumulates. That is, the purpose of G1's learning is to identify the face image after it has passed through E1, while E1 mainly collects the running mean and variance statistics in its BatchNorm layers during the training stage. The other neuron parameters of the E1 network remain at their initial values and can be regarded as unmodified random numbers.
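The training behavior described above on the E1 side — weights frozen at their random initial values while the BatchNorm layers still accumulate running statistics — can be sketched as follows. This is an illustrative NumPy mock-up of the mechanism, not the patent's actual network; the layer, momentum value, and seed are all assumptions for demonstration:

```python
import numpy as np

class FrozenConvWithBN:
    """Toy stand-in for an E1 layer: the conv weight stays at its random
    initial value; only the BN running mean/variance are accumulated."""

    def __init__(self, dim, momentum=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.weight = rng.standard_normal((dim, dim))  # never updated
        self.running_mean = np.zeros(dim)
        self.running_var = np.ones(dim)
        self.momentum = momentum

    def forward(self, x, training=True):
        y = x @ self.weight.T
        if training:
            # accumulate statistics, as E1 does during joint training
            m = self.momentum
            self.running_mean = (1 - m) * self.running_mean + m * y.mean(axis=0)
            self.running_var = (1 - m) * self.running_var + m * y.var(axis=0)
        # normalize with the running statistics (inference-style BN)
        return (y - self.running_mean) / np.sqrt(self.running_var + 1e-5)

layer = FrozenConvWithBN(dim=4)
w_before = layer.weight.copy()
for _ in range(200):  # "training": only the BN statistics move
    batch = np.random.default_rng(1).standard_normal((32, 4)) + 3.0
    layer.forward(batch, training=True)
assert np.allclose(layer.weight, w_before)  # conv weights untouched
```

After enough steps the running mean converges to the batch statistics of the frozen layer's output, which is exactly the role the BN layers play in E1 here.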
Although the face biometric information is already protected once the image passes through E1, the blur degree of the face image is further enhanced to strengthen the face privacy protection effect. The intermediate-layer face is extracted from E1; although the picture size is enlarged, the face privacy protection effect is better. The face image blur degree is F = 1 - histogram(x, x'), where histogram(x, x') denotes the histogram algorithm, a standard algorithm in image processing. Extraction of the intermediate-layer face requires that the blur degree F of the extracted intermediate-layer face image satisfy F ≥ ω2, where ω2 is set manually by the user according to the actual situation (85% in this embodiment). If the extracted face image does not meet the blur requirement, the face image output by the layer before the intermediate layer is extracted and its blur degree is checked; if it still fails, extraction moves one layer further forward, and so on; if no layer meets the requirement, the network initial values are reset and the third step is repeated.
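The blur metric F = 1 - histogram(x, x') above can be implemented with an ordinary normalized-histogram comparison. A minimal sketch follows; the patent does not fix the exact histogram variant, so the histogram-intersection form used here is an assumption:

```python
import numpy as np

def histogram_similarity(x, x_prime, bins=256):
    """Normalized histogram intersection of two grayscale images in [0, 255];
    returns 1.0 for identical histograms, lower for dissimilar ones."""
    h1, _ = np.histogram(x, bins=bins, range=(0, 256))
    h2, _ = np.histogram(x_prime, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum()

def blur_degree(x, x_prime):
    """F = 1 - histogram(x, x'): 0 for an unchanged image, higher for scrambling."""
    return 1.0 - histogram_similarity(x, x_prime)

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(112, 112))       # stand-in original face
scrambled = rng.integers(0, 256, size=(112, 112))  # stand-in for E's output
assert abs(blur_degree(face, face)) < 1e-9
assert -1e-9 <= blur_degree(face, scrambled) <= 1 + 1e-9
```

A threshold check `blur_degree(x2, x) >= 0.85` would then gate whether an intermediate-layer output is blurred enough (ω2 = 85% in the embodiment).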
Verification of the effect of E: human vision cannot normally identify the biometric information of a face image processed by E, and the blur degree of the face image satisfies F ≥ ω2.
The accuracy of face recognition by G1 is required to be at least ω1, where ω1 is set manually by the user according to the actual situation. Verification of the effect of G: G recognizes the privacy-protected face images produced by E with accuracy of at least ω1.
The invention performs recognition on privacy-protected faces. While maintaining accuracy, it has the following advantages over existing face privacy protection and face recognition techniques:
1. Face privacy protection and face recognition are tightly combined. In traditional face recognition privacy protection methods, an image obtained by encrypting or weakly scrambling the face image is stored in the face database, but it must be decrypted before recognition; that is, face privacy protection and face recognition are performed separately. The invention jointly trains the two deep neural networks, the privacy protection network and the face recognition network, and adjusts the module composition of the two networks after training, so that the privacy protection effect is stronger and the recognition rate on privacy-protected faces is improved.
2. Face images are processed by the privacy face generation module before being transmitted to the server side for recognition, and all face images in the server-side face database have undergone privacy protection: they are strongly blurred and cannot be recognized normally by human vision. No decryption step is needed during recognition; the privacy-protected face images in the database are compared directly with the transmitted face image, accomplishing the task of face privacy protection.
3. A face image processed for privacy protection cannot be restored to the original, which effectively protects the security of the server-side face database; even if images are stolen by a third party, leakage of the original face images and similar privacy harms are unlikely.
Drawings
FIG. 1 is a comparison diagram of a traditional face recognition system and a face recognition privacy protection system based on strong scrambling;
FIG. 2 is a diagram of the deep neural network E1 architecture of the present invention;
FIG. 3 (a) is a network architecture diagram of a privacy preserving face generation module E of the present invention;
FIG. 3 (b) is a diagram of a privacy preserving face recognition module G network architecture according to the present invention;
FIG. 4 is a face recognition privacy protection flow diagram;
FIG. 5 (a) is a flow chart of the privacy preserving face training process of FIG. 4;
fig. 5 (b) is a flow chart of the privacy preserving face recognition with front and back ends separated in fig. 4;
FIG. 6 (a) is a diagram of an original face in the example;
FIG. 6 (b) is a face image of an original face after E1 operation;
fig. 6 (c) is a face image of the original face after E operation.
Detailed Description
For a better understanding of the present invention, certain detailed embodiments and specific procedures of operation of the present invention are described below in conjunction with the accompanying drawings:
As shown in FIG. 1, a face recognition privacy protection method capable of realizing strong scrambling involves two deep neural networks E1 and G1 in the training stage. After training is finished, the system comprises a privacy face generation module E and a privacy face recognition module G. The E1 network architecture is shown in fig. 2, and the E and G network architectures are shown in figs. 3 (a) and 3 (b).
As shown in fig. 4, the present embodiment includes face privacy protection recognition training and front-end and back-end separated privacy protection face recognition;
the face privacy protection and recognition training method specifically comprises the following steps:
As shown in fig. 5 (a), E1 is based on the deep neural network UNet: the output of UNet is changed to a color image of the same size as the original image, and a convolution layer conv1 is added before UNet's last layer conv. Let Unetc denote UNet with the last layer removed. Then: Unet(x) = conv(Unetc(x)), and x1 = E1(x) = conv(conv1(Unetc(x)));
As shown in figs. 6 (a) and 6 (b), the face image x becomes x1 after passing through the trained E1.
G1 adopts the ResNet residual network structure of deep neural networks and, after pre-training, performs face recognition on normal (unscrambled) faces. The specific steps are as follows:
s1, preprocessing a face image, namely, cutting a face of a public face data set such as Labled Faces in the Wild (LFW) in the embodiment, and using a general face alignment method in the field, such as using an MTCNN (multiple term neural network) which is a general face alignment neural network in the field.
S2, sending the preprocessed face images into the face recognition network G1 for pre-training.
S3, sending the face data set into the combined E1+G1 network for end-to-end training, where the E1 network is initialized with random parameters.
S4, after end-to-end training, E1 generates a face image that cannot be recognized by human eyes, as shown in fig. 6 (b).
S5, G1 can complete the face recognition task for fig. 6 (b).
To further enhance the blur of the face image, its size is appropriately enlarged: the last convolution layer is separated from E1 and merged into G1, yielding the two new modules, the privacy-protecting face generation module E and the privacy-protecting face recognition module G. The face privacy protection image is enlarged to 3 times the original size; the result, shown in fig. 6 (c), clearly scrambles better than fig. 6 (b).
E is the new deep neural network obtained from UNet by cutting off the last layer conv and adding the new convolution layer conv1, so as to generate a face image x2 with a stronger face privacy protection effect, where x2 is shown in fig. 6 (c). The blur degree of the privacy-protected face image through E satisfies F(x2, x) ≥ ω2, with ω2 = 85% in this embodiment. E = conv1(Unetc), x1 = E1(x) = conv(E(x)), x2 = E(x);
G is the new deep neural network obtained by adding UNet's last convolution layer conv in front of ResNet. The module G can accurately recognize the face of fig. 6 (c), with accuracy at least ω1 during recognition, where ω1 = 98% in this embodiment. G = G1(conv), i.e., conv is applied first and then G1, and G1(x1) = G1(E1(x)) = G1(conv(E(x))) = G(E(x)) = G(x2).
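The identity G1(E1(x)) = G(E(x)) in the equations above is just a re-bracketing of function composition: moving UNet's final conv from E1 into G leaves the end-to-end map unchanged. A toy sketch with stand-in layers (hypothetical functions standing in for Unetc, conv1, conv, and G1, not the real networks):

```python
import numpy as np

rng = np.random.default_rng(42)
W_unetc = rng.standard_normal((8, 8))  # stand-in for UNet minus its last layer
W_conv1 = rng.standard_normal((8, 8))  # the added convolution layer conv1
W_conv  = rng.standard_normal((8, 8))  # UNet's original last conv layer
W_g1    = rng.standard_normal((4, 8))  # stand-in for the ResNet recognizer G1

unetc = lambda x: np.tanh(x @ W_unetc.T)
conv1 = lambda x: np.tanh(x @ W_conv1.T)
conv  = lambda x: np.tanh(x @ W_conv.T)
g1    = lambda x: x @ W_g1.T

# Before the split: E1 = conv ∘ conv1 ∘ unetc, recognizer = G1
e1 = lambda x: conv(conv1(unetc(x)))
# After the split: E drops the last conv, which G absorbs
e = lambda x: conv1(unetc(x))
g = lambda x: g1(conv(x))

x = rng.standard_normal((5, 8))
assert np.allclose(g1(e1(x)), g(e(x)))  # identical end-to-end behavior
```

Because the split changes only where the boundary between the two modules lies, the server can store E's more strongly blurred output x2 while G still recognizes it exactly as the trained E1+G1 pair would.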
The two modules are separated, E is used as the core content of the front end and is only responsible for the function of face privacy protection of the face image, G is used as the core content of the rear end, receives the privacy protection face image transmitted by the front end, and is matched with the existing privacy protection face in the face database to carry out face recognition.
As shown in fig. 5 (b), the front-end system comprises a face image preprocessing module and the privacy-protecting face generation module E, with E as its core. The preprocessing module performs cropping, positioning, and similar preprocessing on the face, and E generates the privacy-protected face.
The back-end system comprises the face database and the privacy-protecting face recognition module G, with G as its core. The faces in the face database are privacy-protected by E, and G recognizes the privacy-protected faces.
The privacy protection face recognition method with the separated front end and the back end comprises the following specific steps:
1. and obtaining a face image to be identified through a data acquisition camera.
2. The face image is sent to the front end.
3. The front-end system preprocesses the face image through a face image preprocessing module, and then sends the preprocessed face into a privacy protection face generation module E to generate a privacy protection face.
4. And transmitting the privacy protection face into a back-end system.
5. And the back end sends the introduced privacy protection face into a privacy face recognition module G and compares the privacy protection face with an existing privacy protection face database.
6. And after the result is obtained, if the matching with the face image in the face database is successful, returning successful and successfully matched face names to the front end, and if the matching is failed, returning failure.
7. And finally, performing backup processing on the matched human face, and storing the successfully matched time and name as the title of the picture in a back-end database.
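Steps 1-7 above amount to nearest-template matching in the privacy-protected domain. A minimal back-end sketch, assuming G yields a fixed-length embedding and that cosine similarity with a threshold is used for matching (the similarity measure and threshold are our assumptions; the patent does not specify them):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity of two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe_embedding, template_db, threshold=0.8):
    """Compare a privacy-protected probe against the privacy-protected template
    database; return the matched name or None. No decryption step anywhere."""
    best_name, best_score = None, -1.0
    for name, template in template_db.items():
        score = cosine(probe_embedding, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

rng = np.random.default_rng(7)
db = {"alice": rng.standard_normal(128), "bob": rng.standard_normal(128)}
probe = db["alice"] + 0.05 * rng.standard_normal(128)  # near-duplicate probe
assert match_face(probe, db) == "alice"
assert match_face(rng.standard_normal(128), db) is None  # unknown face fails
```

On a match, the back end would return the name to the front end and archive the probe with the match time, as in steps 6 and 7.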
The beneficial effects of the invention are as follows: starting directly from the face image database, the images are modified into images whose biometric features human vision cannot normally identify, and face recognition is then performed on them. This effectively prevents harmful behavior such as abuse of face data by a third party that obtains the database. Meanwhile, because deep neural network techniques are used instead of encryption and decryption, an attacker holding the existing images of the face database cannot restore the original face images, effectively realizing face privacy protection. Finally, face recognition is accomplished on top of face privacy protection, and experiments show high recognition accuracy.
The foregoing shows and describes the general principles and broad features of the present invention and advantages thereof. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principle of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the present invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (2)

1. A face recognition privacy protection method capable of realizing strong scrambling is characterized by comprising the following steps: the system comprises a privacy protection face generation module E and a privacy face recognition module G, wherein both E and G are deep neural networks;
the modules E and G are two networks obtained by separating after end-to-end training is carried out by combining two deep neural networks E1 and G1; e1 is a new network obtained by adding a convolution layer and modifying an output layer to the deep neural network UNet, and G1 is a Resnet residual error network;
the module E is used for directly modifying the face data and generating a face image with high ambiguity which cannot be identified by human vision;
the module G is used for identifying a face image subjected to face privacy protection processing;
the method specifically comprises the following steps:
firstly, selecting face image data from a face image database, and carrying out face positioning preprocessing operation on the face image data;
secondly, pre-training the face image data processed in the first step in G1;
thirdly, sending the face database into the E1+ G1 large network for end-to-end training, and separating the whole large network after the training is finished; the specific operation is as follows: cutting off the last part of the convolution layer of E1 and reserving the deep neural network of other layers as a privacy protection face generation module E, and adding the last convolution layer of E1 into G1 as a privacy protection face recognition module G;
fourthly, sending the face image into the privacy protection face generation module E obtained in the third step, outputting the face image after privacy protection and storing the face image in a server side to serve as a privacy protection face template database; in the identification process, the privacy protection face image is compared with a privacy protection face template database through a privacy protection identification module G, and face identification is carried out.
2. The method of claim 1, wherein: in the third step, E1+G1 performs end-to-end training with the loss function
Figure FDA0003755471180000011
Only modifying parameters in G1 in the back propagation of the loss function, wherein the initial parameters of E1 are random numbers, and the parameters of the BN layer are adjusted by E1 only through self-learning; extracting the face of the middle layer from the E1, and if the extracted face does not meet the image blurring degree F which is more than or equal to omega 2, extracting the face of the previous layer until the requirement is met; the face image blur degree F =1-histogram (x, x '), wherein histogram (x, x') refers to a histogram algorithm.
CN202210858828.5A 2022-07-20 2022-07-20 Face recognition privacy protection method capable of realizing strong scrambling Pending CN115168633A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210858828.5A CN115168633A (en) 2022-07-20 2022-07-20 Face recognition privacy protection method capable of realizing strong scrambling


Publications (1)

Publication Number Publication Date
CN115168633A true CN115168633A (en) 2022-10-11

Family

ID=83495434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210858828.5A Pending CN115168633A (en) 2022-07-20 2022-07-20 Face recognition privacy protection method capable of realizing strong scrambling

Country Status (1)

Country Link
CN (1) CN115168633A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116628660A (en) * 2023-05-26 2023-08-22 杭州电子科技大学 Personalized face biological key generation method based on deep neural network coding
CN116628660B (en) * 2023-05-26 2024-01-30 杭州电子科技大学 Personalized face biological key generation method based on deep neural network coding


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination