CN114036553A - K-anonymity-combined pedestrian identity privacy protection method

Info

Publication number
CN114036553A
Authority
CN
China
Prior art keywords
pedestrian
image
identity
data set
anonymous
Prior art date
Legal status
Pending
Application number
CN202111261508.3A
Other languages
Chinese (zh)
Inventor
匡振中
滕龙斌
陈超
俞俊
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202111261508.3A
Publication of CN114036553A

Classifications

    • G06F21/6254: Protecting personal data by anonymising data, e.g. decorrelating personal data from the owner's identification
    • G06F18/23213: Clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/253: Fusion techniques of extracted features
    • G06N3/045: Neural network architectures; combinations of networks
    • G06N3/048: Activation functions
    • G06N3/08: Learning methods

Abstract

The invention provides a pedestrian identity privacy protection method combining k-anonymity. The method generates higher-quality anonymous images through a cross-identity training strategy, and through the designed k-anonymous privacy protection mechanism it preserves both the privacy of pedestrian image data and the usability of the data. The specific steps are as follows: Step 1: collect a proxy data set and preprocess the images; Step 2: establish a k-anonymity mechanism; Step 3: construct an anonymous pedestrian generative adversarial network; Step 4: generate an objective function for the anonymous pedestrian network; Step 5: train and test on a public data set and output the final result. The invention preserves not only the identity privacy of pedestrians but also their attributes. For anonymous pedestrian generation, the method on the one hand fuses the attributes and the target background into the pedestrian generation process, and on the other hand proposes a cross-identity training strategy that improves the quality of the generated images.

Description

K-anonymity-combined pedestrian identity privacy protection method
Technical Field
The invention relates to the field of image privacy protection, and provides a pedestrian identity privacy protection method combining k anonymity.
Background
As cameras are increasingly deployed in public places and deep neural networks continue to advance, face and pedestrian recognition algorithms can identify privacy-related information more reliably than ever before, making the risk of privacy disclosure increasingly hard to avoid.
Most existing privacy protection methods focus on face anonymity and ignore what is particular about pedestrian privacy, so a privacy protection method for pedestrians is urgently needed. To address the risk of privacy disclosure, a reliable identity obfuscation method is required. Ideally, such a method should not only effectively hide identity information but also preserve data availability. In this method, pedestrian privacy protection aims to avoid revealing personal identity in the data while protecting data availability, including attributes, behaviors and the like. Unlike faces, both biometric and non-biometric cues can leak pedestrian information (i.e., identity), so pedestrians need to be anonymized as a whole.
Traditional visual encryption algorithms applied directly to images protect identity privacy but completely destroy the region of interest, so the data availability of the encrypted region is greatly reduced. With the continuous development of neural networks, the information hidden by blurring- and mosaic-based image privacy protection methods can be recovered by a neural network, revealing private information. Deep-learning-based privacy preserving methods such as CIAGAN, a conditional generative-adversarial-network anonymization method for images and video, can remove facial and body features while generating images or video that remain usable for detection or tracking tasks. However, the pedestrian images generated by CIAGAN are not realistic on pedestrian data sets. The pedestrian re-identification model DG-Net, which supports identity exchange, and PATN, which performs pedestrian pose transfer, can both generate high-quality pedestrian images, but neither is suitable for pedestrian anonymization: DG-Net cannot anonymize the texture of pedestrian clothing, and PATN cannot retain the background. In addition, K-Same-Net combines a k-anonymization method with a generative neural network to produce anonymized faces, achieving k-anonymity by mapping the identities of at least k individuals to the same identity.
Disclosure of Invention
The invention aims to provide a pedestrian identity privacy protection method combining k-anonymity that addresses the defects of the prior art. The invention firstly provides a brand-new pedestrian anonymization model, PPAGAN; secondly, it generates higher-quality anonymous images through a cross-identity training strategy; finally, through the designed k-anonymous privacy protection method, it preserves both the privacy of pedestrian image data and the usability of the data. The concrete steps are as follows:
Step 1: collecting a proxy data set and preprocessing the images;
Step 2: establishing a k-anonymity mechanism;
Step 3: constructing an anonymous pedestrian generative adversarial network;
Step 4: generating an objective function for the anonymous pedestrian network;
Step 5: training and testing on the public data set, and outputting the final result.
Further, the step 1 proxy data set collection and image preprocessing specifically comprises the following steps:
1-1. Proxy data set collection: collect privacy-insensitive pedestrian images as the proxy data set.
1-2. Image labeling: label the identity and attributes of each pedestrian image to generate image labels.
1-3. Feature extraction: extract pedestrian image features with a pre-trained pedestrian re-identification model.
1-4. Pose estimation: analyze each image with a pedestrian pose estimator to obtain the pedestrian pose.
1-5. Mask extraction: obtain a pedestrian region mask image with an instance segmentation model.
Further, step 2 establishes the k-anonymity mechanism, with the following specific steps:
2-1. Compute pedestrian identity features.
The average feature of a pedestrian is used as that pedestrian's identity feature:

F_i = (1/N_i) · Σ_{j=1}^{N_i} f_i^{(j)}

where F_i is the identity feature of the pedestrian with identity i, N_i is the number of images of the pedestrian with identity i, and f_i^{(j)} is the image feature of the j-th image of the pedestrian with identity i.
The average feature refers to the mean of the image features computed over all images of one pedestrian;
and 2-2. clustering and grouping identities.
And grouping the pedestrians according to the attributes, and then performing characteristic grouping under each attribute grouping.
And clustering the identity characteristics of the pedestrians by using a k-means variant algorithm in the characteristic grouping, and enabling the number of the pedestrians in each clustered group to be the same to obtain a clustering center. And the same agent pedestrian is used for k pedestrians in the same group so as to realize the k anonymity theory.
2-3. Pedestrian proxy identity mapping.
First, perform identity clustering and grouping on the data set to be anonymized. Second, map the cluster centers of the data set to be anonymized onto the proxy data set, taking the mapping with the minimum mapping distance as the target, to obtain the pedestrian proxy identity mapping. The mapping distance is:

D_M = Σ_{i=1}^{n} || F̃_i − F_{f_i} ||

where D_M is the mapping distance, n is the number of clusters, F̃_i is the pedestrian identity feature of the i-th cluster center of the data set to be anonymized, f_i is the proxy pedestrian identity mapped to the i-th cluster center, and F_{f_i} is the identity feature of the proxy pedestrian with identity f_i.
Further, step 3 constructs the anonymous pedestrian generative adversarial network (PPAGAN), with the following specific steps:
and 3-1, constructing a generator.
The generator aims at learning the source image ISTo generation of image IGAnd having generated the graph pose KGAnd target chart posture KTThe characteristics of (A) are the same. A gesture-attention transfer block (PATB) is used as a generator in PPAGAN. And the plurality of gesture-attention transfer blocks are cascaded; starting from the initial image feature and pose feature, the plurality of PATBs update both features step by step. The final output of PATB decodes the final image features through multiple deconvolution layers and one convolution layer to obtain the generated image IGWhile discarding the final pose features. The PPAGAN uses 9 PATBs, and a PATB generator in which image features and pose features are input in cascade is extracted by a convolutional layer and a fully-connected layer.
The initial image characteristic is a source image ISThe image characteristics of (1); gesture features include generating a graph gesture KGAnd target chart posture KTThe characteristics of (1).
3-2. Construct the discriminators.
The discriminators comprise an image discriminator D_I and a pose discriminator D_K: D_I judges the realism of the input image and its consistency with the input attributes, while D_K judges the consistency between the input image and the input pose. The input to D_I consists of (target image, attributes) and (generated image, attributes) pairs, the former judged real and the latter fake. The input to D_K consists of (target image, pose) and (generated image, pose) pairs, the former judged real and the latter fake. In D_I, the image features and attribute features are fused through convolution layers and a fully connected layer, and the image realism score S_I is obtained by feeding the fused image features into 3 residual blocks. The pose realism score S_K is obtained by feeding the stacked pose image and pedestrian image into 1 down-sampling convolution layer and 3 residual blocks. Finally, the image realism and pose realism are combined as S = S_I · S_K.
Further, step 4 generates the objective function for the anonymous pedestrian network, with the following specific steps:
4-1. Combine all losses into the overall objective:

L = λ1·L_GAN + λ2·L_I + λ3·L_F

where λ1 is the weight of the GAN objective, λ2 is the weight of the reconstruction loss, and λ3 is the weight of the identity cross-feature loss; here λ1 = 10, λ2 = 10, λ3 = 1.
4-2. GAN objective.
The core idea of a GAN is the adversarial game between the generator and the discriminator. The generator's goal is to produce realistic images that the discriminator cannot distinguish; the discriminator's goal is to tell whether an image was produced by the generator. This can be expressed as:

L_GAN = E[log D_I(I_T, A)] + E[log(1 − D_I(I_G, A))] + E[log D_K(I_T, K_T)] + E[log(1 − D_K(I_G, K_T))]

where I_S, I_T and I_G denote the source image, target image and generated image respectively, B_T denotes the target background, A denotes the attributes, and K_S and K_T denote the source pose and target pose respectively.
4-3. Reconstruction loss.
The reconstruction loss of PPAGAN is a pixel-level loss over the pedestrian region, which lets the network focus on generating the target person and fusing it with the background:

L_I = || M_T ⊙ (I_T − I_G) ||_1    (4)

where ⊙ denotes element-wise multiplication and M_T is the binary pedestrian region mask.
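As an illustrative sketch only (not the patent's implementation), the masked pixel-level L1 loss can be computed over flattened images; the function name and the use of plain Python lists are assumptions:

```python
def masked_l1_loss(target, generated, mask):
    """L_I = || M_T ⊙ (I_T − I_G) ||_1 over flattened pixel lists.

    target, generated: flattened pixel values of I_T and I_G.
    mask: binary list, 1 = pedestrian region, 0 = background.
    """
    return sum(m * abs(t - g) for t, g, m in zip(target, generated, mask))

# Background pixels (mask = 0) contribute nothing to the loss.
loss = masked_l1_loss([0.9, 0.2, 0.5], [0.4, 0.2, 0.1], [1, 0, 1])
```

Only the two masked pixels enter the sum, which is exactly why the network is pushed to refine the pedestrian region rather than the background.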
4-4. Identity cross-feature loss.
In the identity-cross training stage, a feature loss constrains the generation of identity-crossed pedestrians. Specifically, the image features of the pedestrian in the generated image are extracted with a pre-trained pedestrian re-identification model, and the loss term is chosen according to whether the identity of the pedestrian in the source image matches that of the pedestrian in the target image:

L_F = || F_G − F_T ||, if the source and target identities are the same;
L_F = || F_G − F_S ||, otherwise

where F_S is the image feature of the pedestrian in the source image, F_T that of the pedestrian in the target image, and F_G that of the pedestrian in the generated image.
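A minimal sketch of the selection logic described above, assuming an L2 norm (the exact norm in the patent's equation image is not recoverable from the text, and the function name is hypothetical):

```python
def identity_cross_feature_loss(f_g, f_s, f_t, same_identity):
    """Compare the generated pedestrian's re-ID feature against the
    appropriate reference: the target's feature when source and target
    share an identity, otherwise the source's (cross-identity case)."""
    ref = f_t if same_identity else f_s
    # Euclidean (L2) distance between feature vectors
    return sum((a - b) ** 2 for a, b in zip(f_g, ref)) ** 0.5

# Cross-identity case: the generated feature is pulled toward the source.
loss = identity_cross_feature_loss([3.0, 4.0], [0.0, 0.0], [1.0, 1.0], False)
```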
Further, step 5 trains the model and tests on data, with the following specific steps:
5-1. Prepare the data set, for example the public pedestrian data set Market-1501, and preprocess it as in step 1 to obtain the labels, features, poses and pedestrian region masks of the images. The training set serves as the proxy data set and the test set as the data set to be anonymized.
5-2. Split the data set into a training set and a test set with no identity overlap, and train on the training set.
5-3. Generate anonymous pedestrians with the pedestrian anonymization network trained in steps 3 and 4, combined with the pedestrian proxy identity mapping.
5-4. To verify the effectiveness of the proposed method, compute the anonymity rate (including the pedestrian re-identification rate) and the data availability (attribute retention rate, key-point retention rate and image quality), comparing the proposed method with current state-of-the-art methods.
The invention has the beneficial effects that:
in the aspect of pedestrian anonymity, the method groups the attributes of the pedestrians, and clusters the attributes in each attribute group according to the identity characteristics to generate the identity mapping of the pedestrian agent, so that a k anonymity mechanism is finally realized, and not only the identity privacy of the pedestrians but also the attributes are kept. In the aspect of anonymous pedestrian generation, on one hand, the method combines the fusion of the attributes and the target background into the pedestrian generation process, and on the other hand, the method provides a cross identity training strategy, so that the quality of the generated image is improved.
Drawings
FIG. 1 is a k-anonymity map;
FIG. 2 is an anonymization flow diagram;
table 1 shows the ablation experiments for the various modules of the method;
table 2 shows the results of the comparison of this method with other methods.
Detailed Description
The invention will be further explained with reference to the drawings.
In this pedestrian privacy protection technique based on a generative adversarial network, the k-anonymity mechanism is shown in FIG. 1, where k pedestrians in the input data set are mapped onto one proxy pedestrian, with k = 2; an ellipse represents a mapping relationship, and the pedestrian in the dashed box is the proxy pedestrian. The overall framework is shown in the anonymization flow diagram of FIG. 2, where I_T denotes the target pedestrian to be anonymized, I_P the proxy pedestrian, I_A the anonymous pedestrian, B_T the target background, K_P the proxy pedestrian pose, K_T the target pedestrian pose, A the attributes, F_K the pose features, F_I the image features, F_F the fused image features, D_K the pose discriminator, and D_I the image discriminator. The invention is implemented in the following steps:
Step 1: collecting a proxy data set and preprocessing the images;
Step 2: establishing a k-anonymity mechanism;
Step 3: constructing an anonymous pedestrian generative adversarial network;
Step 4: generating an objective function for the anonymous pedestrian network;
Step 5: training and testing on the public data set, and outputting the final result.
Step 1, proxy data set acquisition and image preprocessing, which comprises the following specific steps:
1-1. Proxy data set collection: collect privacy-insensitive pedestrian images as the proxy data set; these may come from public pedestrian data sets (e.g., Market-1501) or from recruited volunteers.
1-2. Image labeling: label the identity and attributes of each pedestrian image to generate image labels. For example, the identities of n pedestrians may be labeled with the integers 1 to n, the gender attribute with 0 for male and 1 for female, and the upper garment with 0 for short sleeves and 1 for long sleeves.
1-3. Feature extraction: extract pedestrian image features with a ResNet50-based pedestrian re-identification model. The feature is the input to the last fully connected layer of ResNet50, of dimension 2048.
1-4. Pose estimation: analyze each image with a pedestrian pose estimator (e.g., OpenPose) to obtain the pedestrian pose, consisting of 18 key points such as the nose, ears and shoulders.
1-5. Mask extraction: obtain a pedestrian region mask image with an instance segmentation model (e.g., Mask R-CNN). The mask is a binary image with the same height and width as the pedestrian image, where 1 marks the pedestrian and 0 the background.
Step 2, establishing the k-anonymity mechanism, comprises the following specific steps:
2-1. Compute pedestrian identity features. Compute the features of all images with the pre-trained pedestrian re-identification model, then use the average feature of each pedestrian's images as that pedestrian's identity feature:

F_i = (1/N_i) · Σ_{j=1}^{N_i} f_i^{(j)}

where F_i is the identity feature of the pedestrian with identity i, N_i is the number of images of the pedestrian with identity i, and f_i^{(j)} is the feature of the j-th image of the pedestrian with identity i.
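The per-identity averaging above can be sketched as follows (an illustrative fragment, not the patent's code; in practice the feature vectors would be 2048-dimensional re-ID features):

```python
from collections import defaultdict

def identity_features(features, identities):
    """Average the image features of each identity (step 2-1).

    features: list of feature vectors (lists of floats).
    identities: parallel list of identity labels.
    Returns {identity: mean feature vector}.
    """
    grouped = defaultdict(list)
    for f, i in zip(features, identities):
        grouped[i].append(f)
    # component-wise mean over each identity's feature vectors
    return {
        i: [sum(vals) / len(fs) for vals in zip(*fs)]
        for i, fs in grouped.items()
    }

feats = identity_features([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [7, 7, 8])
# identity 7 → [2.0, 3.0]; identity 8 → [5.0, 6.0]
```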
2-2. Identity clustering and grouping. Group the pedestrians by attribute, then perform feature grouping within each attribute group. The feature grouping uses the same-size k-means variant provided in the ELKI open-source data mining software, which keeps every cluster the same size, reduces the overall variance by swapping elements between clusters, and converges to a balanced clustering.
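The following is a deliberately simplified sketch of equal-size assignment, not ELKI's swap-based algorithm: given fixed centers, it greedily fills each cluster to a fixed capacity in order of increasing distance. It only illustrates the "same cluster size" constraint.

```python
def equal_size_assign(points, centers, size):
    """Assign each point to a cluster so every cluster gets `size` points.

    Greedy illustration of the equal-size constraint; the patent's method
    (ELKI's same-size k-means variant) additionally swaps elements between
    clusters to reduce variance.
    """
    def d2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))

    # rank all (point, center) pairs by distance, then fill to capacity
    pairs = sorted(
        (d2(p, c), pi, ci)
        for pi, p in enumerate(points)
        for ci, c in enumerate(centers)
    )
    assign, counts = {}, [0] * len(centers)
    for _, pi, ci in pairs:
        if pi not in assign and counts[ci] < size:
            assign[pi] = ci
            counts[ci] += 1
    return [assign[i] for i in range(len(points))]

labels = equal_size_assign([[0.0], [0.1], [10.0], [10.1]], [[0.0], [10.0]], 2)
# → [0, 0, 1, 1]: each cluster ends up with exactly k = 2 pedestrians
```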
2-3. Pedestrian proxy identity mapping. The pedestrian proxy identity mapping is a mapping f from the cluster centers of the data set to be anonymized to identities in the proxy data set. The mapping distance is:

D_M = Σ_{i=1}^{n} || F̃_i − F_{f_i} ||

where D_M is the mapping distance, n is the number of clusters, F̃_i is the pedestrian identity feature of the i-th cluster center of the data set to be anonymized, f_i is the proxy pedestrian identity mapped to the i-th cluster center, and F_{f_i} is the identity feature of the proxy pedestrian with identity f_i. The final proxy identity mapping is obtained by solving

f* = argmin_f D_M

subject to f_i ≠ f_j for all i ≠ j, which preserves the diversity of the anonymous images. When the number of clusters under an attribute group of the data set to be anonymized does not exceed the number of proxy pedestrian identities under that attribute, the problem becomes a matching problem on a weighted bipartite graph, where each weight is the Euclidean distance between the identity features of a cluster-center pedestrian and a proxy pedestrian; this condition can be satisfied by adjusting the value of k, so the pedestrian proxy identity mapping f can be solved with the Hungarian algorithm.
Step 3, constructing the anonymous pedestrian generative adversarial network, comprises the following specific steps:
and 3-1, generating a pedestrian process by the generator. The features input to the pose-attention transfer block (PATB) include fused image features and pose features, wherein the fused image features are obtained by attribute feature and image feature fusion. To obtain image features, we input source pedestrian and target background overlays into the convolutional layer on the color channel. To obtain the attribute signature, we input the attribute tag into 7 fully-connected layers. The fused image features are obtained by inputting features superimposed by the attribute features and the image features into the convolution layer. We stack thermodynamic diagrams of the source and target poses on the depth channel and input them into the convolutional layer to obtain pose features. The three convolutional layers have the same structure, are three downsampling convolutional layers and comprise a batch normalization layer and a ReLU activation function.
After the fused image features and pose features are obtained, they are fed into the first of 9 cascaded PATBs, the output of each PATB serving as the input of the next; the image features of the final PATB are then decoded through 2 deconvolution layers and 1 convolution layer to obtain the decoded image I_o. The generated image I_G fuses I_o with the target background B_T through the mask M_T:

I_G = M_T ⊙ I_o ⊕ (1 − M_T) ⊙ B_T

where M_T is the binary pedestrian region mask (1 marks the pedestrian, 0 the background), ⊙ is element-wise multiplication, and ⊕ is element-wise addition.
3-2. Construct the discriminators. The image discriminator and the pose discriminator both follow a residual-network design: 1 down-sampling stage and 3 residual blocks, with a final sigmoid output. The down-sampling stage consists of 3 batch normalization layers, 3 ReLU layers and 3 convolution layers. Each residual block consists of 2 batch normalization layers, 1 ReLU layer, 2 convolution layers and 1 dropout layer, with a residual connection as in ResNet.
Step 4, generating the objective function for the anonymous pedestrian network, comprises the following specific steps:
4-1. Combine all losses into the overall objective:

L = λ1·L_GAN + λ2·L_I + λ3·L_F

where λ1 is the weight of the GAN objective, λ2 is the weight of the reconstruction loss, and λ3 is the weight of the identity cross-feature loss. In training we take λ1 = 10, λ2 = 10, λ3 = 1.
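For clarity, the weighted combination with the training weights stated above is simply (function name assumed):

```python
def total_loss(l_gan, l_i, l_f, lam=(10.0, 10.0, 1.0)):
    """L = λ1·L_GAN + λ2·L_I + λ3·L_F with the weights used in training
    (λ1 = 10, λ2 = 10, λ3 = 1)."""
    return lam[0] * l_gan + lam[1] * l_i + lam[2] * l_f

# e.g. L_GAN = 1.0, L_I = 0.5, L_F = 2.0 → L = 10 + 5 + 2 = 17
```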
Step 5, training a model and testing data, and specifically comprises the following steps:
5-1. Select a suitable data set and preprocess it as described in step 1. For example, the Market-1501 data set, comprising 32,668 images of 1,501 pedestrian identities, may be used. The training set serves as the proxy data set and the test set as the data set to be anonymized.
5-2. Train the anonymous pedestrian generative adversarial network of step 3 on the training set with the objective function of step 4. Training uses the Adam optimizer with a learning rate of 0.0002, β1 = 0.5, β2 = 0.999 and a batch size of 32. The network is trained for 700 epochs in total: the first 300 epochs do not include identity crossing, and in the last 400 epochs identity crossing is applied with probability 0.3.
5-3. To verify the effectiveness of the proposed method, it is compared with traditional methods such as blurring, pixelation and face removal, and additionally with deep-learning methods such as the conditional identity anonymization generative adversarial network CIAGAN and cross-identity pedestrian generation with DG-Net. To verify the effectiveness of fusing the background and attributes, ablation experiments were performed. Evaluation uses the anonymity rate (including the pedestrian re-identification rate) and the data availability (attribute retention rate, key-point retention rate and image quality). The anonymity rate is evaluated with a pre-trained pedestrian re-identification model: the higher the re-identification rate, the lower the anonymity rate. The attribute retention rate is evaluated with a pre-trained attribute predictor. The key-point retention rate uses the pre-trained pose recognition model OpenPose to detect key points and is measured by PCKh. Image quality is measured by the structural similarity (SSIM) between the anonymous data set and the target data set.
Results of the experiment
1. Table 1 shows the ablation experiments of the modules of the method on the Market1501 data set.
2. Table 2 shows the comparison experiment results of the method on the Market1501 data set with the methods of blurring, pixelation, face deduction, DG-Net and CIAGAN on anonymity rate, attribute retention rate, key point retention rate and image quality index;
table 1 ablation experimental results of each module on the Market1501 data set by the method, a represents a baseline, B represents a fusion background, C represents a fusion attribute, D represents cross-identity training, and k is 10.
Table 2: comparison of the method with other methods on the Market-1501 data set, where k = 10.

Claims (6)

1. A pedestrian identity privacy protection method combined with k-anonymity, characterized in that a brand-new pedestrian anonymization model PPAGAN is provided first; secondly, higher-quality anonymous images are generated through a cross-identity training strategy; finally, through the designed k-anonymous privacy protection method, both the privacy of pedestrian image data and the usability of the data are preserved; the method comprises the following concrete steps:
step 1: collecting a proxy data set and preprocessing an image;
step 2: establishing a k anonymity mechanism;
step 3: constructing an anonymous pedestrian generative adversarial network;
step 4: generating an objective function for the anonymous pedestrian network;
step 5: training and testing on the public data set, and outputting the final result.
2. The k-anonymity combined pedestrian identity privacy protection method according to claim 1, wherein step 1 proxy data set collection and image preprocessing specifically comprises the following steps:
1-1. proxy data set collection: collecting privacy-insensitive pedestrian images as the proxy data set;
1-2. image labeling: labeling the identity and attributes of each pedestrian image to generate image labels;
1-3. feature extraction: extracting pedestrian image features with a pre-trained pedestrian re-identification model;
1-4. pose estimation: analyzing each image with a pedestrian pose estimator to obtain the pedestrian pose;
1-5. mask extraction: obtaining a pedestrian region mask image with an instance segmentation model.
3. The method for protecting privacy of pedestrian identity in combination with k anonymity according to claim 1 or 2, wherein the k anonymity mechanism is established in step 2, and the specific steps are as follows:
2-1, calculating the identity characteristics of the pedestrian;
the average feature of a pedestrian is used as that pedestrian's identity feature:

F_i = (1/N_i) · Σ_{j=1}^{N_i} f_i^{(j)}

where F_i is the identity feature of the pedestrian with identity i, N_i is the number of images of the pedestrian with identity i, and f_i^{(j)} is the image feature of the j-th image of the pedestrian with identity i;
the average characteristic refers to an average value of a plurality of image characteristics solved by a plurality of pedestrian images corresponding to one pedestrian;
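A minimal NumPy sketch of the per-identity average feature of step 2-1; the function name and toy feature vectors are illustrative, not part of the claim:

```python
import numpy as np

def identity_features(features, ids):
    """Compute F_i: the mean of the image features belonging to each identity."""
    by_id = {}
    for f, i in zip(features, ids):
        by_id.setdefault(i, []).append(f)
    # F_i = (1 / N_i) * sum_j F_i^j
    return {i: np.mean(np.stack(fs), axis=0) for i, fs in by_id.items()}

# toy example: two identities with 2-D image features
features = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
ids = [0, 0, 1]
F = identity_features(features, ids)
```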
2-2. identity cluster grouping;
pedestrians are first grouped by attribute, and feature grouping is then performed within each attribute group;
within each feature group, the pedestrian identity features are clustered with a same-size variant of the k-means algorithm, so that every cluster contains the same number of pedestrians, yielding the cluster centers; the k pedestrians in one group share the same proxy pedestrian, realizing the k-anonymity principle;
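The same-size clustering of step 2-2 can be sketched as a greedy, capacity-constrained variant of k-means; this is a simplified illustration under the assumption that `len(X)` is divisible by `k`, not the exact algorithm of the claim:

```python
import numpy as np

def balanced_kmeans(X, k, iters=10, seed=0):
    """k-means variant in which every cluster ends up with the same size."""
    rng = np.random.default_rng(seed)
    n, cap = len(X), len(X) // k          # cap = points per cluster
    centers = X[rng.choice(n, k, replace=False)]
    for _ in range(iters):
        # assign greedily, closest (point, center) pairs first,
        # respecting the per-cluster capacity
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = -np.ones(n, dtype=int)
        counts = np.zeros(k, dtype=int)
        for idx in np.argsort(d, axis=None):
            p, c = divmod(int(idx), k)
            if labels[p] == -1 and counts[c] < cap:
                labels[p], counts[c] = c, counts[c] + 1
        centers = np.stack([X[labels == c].mean(axis=0) for c in range(k)])
    return labels, centers

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
labels, centers = balanced_kmeans(X, k=2)
```

Every cluster receives exactly `cap` members, which is what lets k pedestrians in a group share one proxy identity.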
2-3. proxy identity mapping;
first, identity cluster grouping is performed on the data set to be anonymized; then the cluster centers of the data set to be anonymized are mapped onto the proxy data set, taking the mapping with the minimum mapping distance as the pedestrian proxy identity mapping; the mapping distance is defined as:

$D_M = \sum_{i=1}^{n} \left\| F_i^c - F_{f_i} \right\|$

where $D_M$ is the mapping distance, $n$ is the number of clusters, $F_i^c$ is the identity feature of the $i$-th cluster center of the data set to be anonymized, $f_i$ is the proxy pedestrian identity mapped to the $i$-th cluster center, and $F_{f_i}$ is the identity feature of proxy pedestrian $f_i$.
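Assuming the mapping distance uses the Euclidean norm between identity features, the minimum-distance mapping of step 2-3 can be found by brute force for a small number of clusters (`proxy_mapping` is a hypothetical helper, not the claimed procedure):

```python
import numpy as np
from itertools import permutations

def proxy_mapping(anon_centers, proxy_feats):
    """Assign one proxy identity to each cluster center, minimizing
    D_M = sum_i || F_i^c - F_{f_i} ||."""
    n = len(anon_centers)
    best_d, best_perm = np.inf, None
    for perm in permutations(range(len(proxy_feats)), n):
        d = sum(np.linalg.norm(anon_centers[i] - proxy_feats[p])
                for i, p in enumerate(perm))
        if d < best_d:
            best_d, best_perm = d, perm
    return list(best_perm), best_d

anon_centers = np.array([[0.0, 0.0], [10.0, 0.0]])
proxy_feats = np.array([[9.0, 0.0], [1.0, 0.0]])
mapping, dist = proxy_mapping(anon_centers, proxy_feats)
```

For realistic cluster counts a combinatorial assignment solver would replace the brute-force search; the objective stays the same.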
4. The pedestrian identity privacy protection method combined with k-anonymity according to claim 3, wherein the anonymous-pedestrian generative adversarial network of step 3 is constructed as follows:
3-1. constructing the generator;
the generator learns the mapping from the source image $I_S$ to the generated image $I_G$, such that the generated-image pose $K_G$ matches the target pose $K_T$; pose-attention transfer blocks (PATBs) serve as the generator of PPAGAN, with several PATBs cascaded; starting from the initial image features and pose features, the cascaded PATBs update the two feature streams step by step; the image features output by the last PATB are decoded through several deconvolution layers and one convolution layer to obtain the generated image $I_G$, while the final pose features are discarded; the PPAGAN generator uses 9 PATBs, and the image features and pose features input to the cascade are extracted through convolution layers and fully connected layers;
the initial image features are the image features of the source image $I_S$; the pose features comprise the generated-image pose $K_G$ and the target pose $K_T$;
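A schematic NumPy sketch of the cascaded pose-attention update: each block gates a residual update of the image stream by an attention mask derived from the pose stream, then passes both streams on. The dense weight matrices stand in for the convolutional layers of an actual PATB, so this illustrates the data flow only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def patb(img_feat, pose_feat, Wi, Wp):
    """One pose-attention transfer block (schematic): the pose stream produces
    an attention mask that gates a residual update of the image stream."""
    att = sigmoid(pose_feat @ Wp)                # attention mask from pose stream
    img_feat = img_feat + att * (img_feat @ Wi)  # gated residual image update
    pose_feat = pose_feat + img_feat             # pose stream absorbs image context
    return img_feat, pose_feat

rng = np.random.default_rng(0)
d = 8
img, pose = rng.normal(size=d), rng.normal(size=d)
Wi, Wp = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
for _ in range(9):                               # nine cascaded blocks, as claimed
    img, pose = patb(img, pose, Wi, Wp)
```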
3-2. constructing the discriminators;
the discriminators comprise an image discriminator $D_I$ and a pose discriminator $D_K$, where $D_I$ judges the realism of the input image and its consistency with the input attributes, and $D_K$ judges the consistency between the input image and the input pose; $D_I$ takes (target image, attributes) and (generated image, attributes) pairs as input, judging the former true and the latter false; $D_K$ takes (target image, pose) and (generated image, pose) pairs as input, judging the former true and the latter false; in $D_I$, the image features and attribute features are fused through convolution layers and fully connected layers, and the final image realism score $S_I$ is obtained by feeding the fused features of $D_I$ into 3 residual blocks; the concatenated features of the pose image and the pedestrian image are fed into 1 down-sampling convolution layer and 3 residual blocks to obtain the pose realism score $S_K$; finally the two scores are combined as $S = S_I \cdot S_K$.
5. The pedestrian identity privacy protection method combined with k-anonymity according to claim 1, wherein the objective function for anonymous pedestrian generation of step 4 is constructed as follows:
4-1. the combined objective function for anonymous pedestrian generation, specifically:

$L = \lambda_1 L_{GAN} + \lambda_2 L_I + \lambda_3 L_F$

where $\lambda_1$ is the weight of the GAN objective, $\lambda_2$ is the weight of the reconstruction loss, and $\lambda_3$ is the weight of the identity cross-feature loss; here $\lambda_1 = 10$, $\lambda_2 = 10$, $\lambda_3 = 1$;
4-2. the GAN objective function;
the core idea of the GAN is the adversarial game between the generator and the discriminator: the generator aims to produce realistic images that the discriminator cannot distinguish from real ones, while the discriminator aims to judge whether an image was produced by the generator, which can be expressed as:

$L_{GAN} = \mathbb{E}\left[\log D(I_T)\right] + \mathbb{E}\left[\log\left(1 - D(I_G)\right)\right], \qquad I_G = G(I_S, K_S, A, B_T)$

where $I_S$, $I_T$ and $I_G$ denote the source image, target image and generated image respectively, $B_T$ denotes the target background, $A$ denotes the attributes, and $K_S$ and $K_T$ denote the source pose and target pose respectively;
4-3. the reconstruction loss objective function;
the reconstruction loss of PPAGAN is a pixel-level loss over the pedestrian region, which lets the network focus on generating the target person and fusing it with the background, specifically:

$L_I = \left\| M_T \odot (I_T - I_G) \right\|_1 \quad (4)$

where $\odot$ denotes the element-wise product and $M_T$ is the binary pedestrian region mask;
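The masked pixel-level loss of equation (4) is straightforward to express; a small NumPy example with a 2×2 image and binary mask:

```python
import numpy as np

def masked_l1(I_t, I_g, M_t):
    """L_I = || M_T ⊙ (I_T - I_G) ||_1 : pixel loss restricted to the pedestrian region."""
    return np.abs(M_t * (I_t - I_g)).sum()

I_t = np.array([[1.0, 2.0], [3.0, 4.0]])   # target image
I_g = np.array([[1.5, 2.0], [0.0, 4.0]])   # generated image
M_t = np.array([[1.0, 0.0], [1.0, 0.0]])   # binary pedestrian region mask
loss = masked_l1(I_t, I_g, M_t)
```

Only pixels inside the mask contribute, so errors in the background are ignored.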
4-4. the identity cross-feature loss function;
in the identity cross-training stage, a loss between features constrains the generation of identity-crossed pedestrians; specifically, the image features of the pedestrian in the generated image are extracted with a pre-trained pedestrian re-identification model, and the identity cross-feature loss is selected according to whether the pedestrian identity in the source image coincides with that in the target image, specifically:

$L_F = \begin{cases} \left\| F_G - F_T \right\|, & \text{if the source and target identities coincide} \\ \left\| F_G - F_S \right\|, & \text{otherwise} \end{cases} \quad (5)$

where $F_S$ is the image feature of the pedestrian in the source image, $F_T$ is that of the pedestrian in the target image, and $F_G$ is that of the pedestrian in the generated image.
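One plausible reading of step 4-4 (pull the generated feature toward the target feature when the identities coincide, otherwise toward the source feature) can be sketched as follows; the piecewise choice of anchor is an assumption:

```python
import numpy as np

def identity_cross_loss(F_s, F_t, F_g, same_identity):
    """Feature-level loss: anchor on F_T when source and target share an
    identity, otherwise on F_S (assumed reading of the claim)."""
    anchor = F_t if same_identity else F_s
    return float(np.linalg.norm(F_g - anchor))

F_s = np.array([0.0, 0.0])
F_t = np.array([3.0, 4.0])
F_g = np.array([3.0, 4.0])
same_case = identity_cross_loss(F_s, F_t, F_g, same_identity=True)
cross_case = identity_cross_loss(F_s, F_t, F_g, same_identity=False)
```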
6. The pedestrian identity privacy protection method combined with k-anonymity according to claim 5, wherein the model training and data testing of step 5 comprise:
5-1. data set preparation: the public pedestrian data set Market1501 is preprocessed according to step 1 to obtain the labels, features, poses and pedestrian region masks of the images; the training set serves as the proxy data set and the test set as the data set to be anonymized;
5-2. the data set is divided into a training set and a test set with no identity overlap between them, and training is performed on the training set;
5-3. anonymous pedestrians are generated with the pedestrian anonymization network trained in steps 3 and 4, combined with the pedestrian proxy identity mapping;
5-4. to verify the effectiveness of the proposed method, it is compared with current state-of-the-art methods in terms of anonymity rate and data availability rate.
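The anonymity rate and data availability rate of step 5-4 are not defined in the claims; one common reading (anonymity rate = share of anonymized images that a re-ID model can no longer link to the original identity; availability rate = share of attribute annotations still preserved) can be sketched as:

```python
import numpy as np

def anonymity_rate(orig_ids, reid_ids):
    """Share of anonymized images a re-ID model fails to link back to the
    original identity (higher = stronger privacy). Assumed metric definition."""
    orig, reid = np.asarray(orig_ids), np.asarray(reid_ids)
    return float((orig != reid).mean())

def data_availability_rate(preserved_attrs, total_attrs):
    """Share of attributes still correctly predictable after anonymization
    (higher = better utility). Assumed metric definition."""
    return preserved_attrs / total_attrs

# toy evaluation: 4 anonymized images, the re-ID model still recovers one identity
rate = anonymity_rate([1, 2, 3, 4], [1, 9, 9, 9])
avail = data_availability_rate(27, 30)
```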
CN202111261508.3A 2021-10-28 2021-10-28 K-anonymity-combined pedestrian identity privacy protection method Pending CN114036553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111261508.3A CN114036553A (en) 2021-10-28 2021-10-28 K-anonymity-combined pedestrian identity privacy protection method


Publications (1)

Publication Number Publication Date
CN114036553A true CN114036553A (en) 2022-02-11

Family

ID=80135635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111261508.3A Pending CN114036553A (en) 2021-10-28 2021-10-28 K-anonymity-combined pedestrian identity privacy protection method

Country Status (1)

Country Link
CN (1) CN114036553A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549883A (en) * 2022-02-24 2022-05-27 北京百度网讯科技有限公司 Image processing method, deep learning model training method, device and equipment
CN114549883B (en) * 2022-02-24 2023-09-05 北京百度网讯科技有限公司 Image processing method, training method, device and equipment for deep learning model
CN114817991A (en) * 2022-05-10 2022-07-29 上海计算机软件技术开发中心 Internet of vehicles image desensitization method and system
CN114817991B (en) * 2022-05-10 2024-02-02 上海计算机软件技术开发中心 Internet of vehicles image desensitization method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination