CN118397115A - Method for reverse analysis and key attribute recovery of security coding image


Info

Publication number
CN118397115A
Authority
CN
China
Prior art keywords
image
sample
training
samples
mixed
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410806267.3A
Other languages
Chinese (zh)
Other versions
CN118397115B (en)
Inventor
毛云龙
叶竹静
张渊
仲盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Application filed by Nanjing University
Priority to CN202410806267.3A
Publication of CN118397115A
Application granted
Publication of CN118397115B
Legal status: Active


Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image security processing and discloses a method for reverse analysis and key attribute recovery of security-coded images. Aiming at the weaknesses of existing image security coding techniques, the invention provides an effective reverse analysis scheme for encodings: it recovers key attribute information of the original image data from an encoded data set, reveals the attack paths and weaknesses of image security coding techniques, and provides policy guidance for their security. The method has important application prospects in fields such as key data attribute protection and data security analysis.

Description

Method for reverse analysis and key attribute recovery of security coding image
Technical Field
The invention relates to an image key attribute recovery method, and more particularly to a method for reverse analysis and key attribute recovery of security-coded images, belonging to the technical field of image security processing.
Background
With the continued development of digitization, the use and sharing of image data are becoming increasingly common, but they also carry the risk of leaking potentially critical attributes. Disclosure of sensitive information contained in image data, such as personal characteristics, financial data, and medical records, may lead to problems such as identity theft. Conventional privacy protection methods protect image data through strict encryption, which may degrade image quality and lose semantic information. Image security coding techniques have therefore been developed to preserve the integrity and usability of image data while protecting its key attributes.
Image security encoding techniques generate an encoded data set by encoding and transforming image data. The encoded data set retains part of the characteristics and structure of the original images and can be used in scenarios such as model training and data sharing. Meanwhile, through methods such as image mixing and added perturbations, the encoding process ensures that key attributes of the original data are not revealed in the encoded images. Mixup and InstaHide are two typical image security coding techniques. The former expands the data set by interpolating multiple images and their labels while reducing the probability that original data features can be reverse-parsed from the encoded data set. InstaHide adds an extra perturbation on top of Mixup, flipping the sign bit of each pixel of the image with a certain probability to further hide the key attribute information of the original image.
The inventors have found that there is a natural tension between the utility and the privacy of an encoded data set: its good performance on a model may reveal more information about the original images. Even though the sensitive information in the encoded data set is visually ambiguous or otherwise difficult to interpret, in some cases it is still possible to recover the original images or infer sensitive information by analyzing the features and patterns of the encoded data set.
Based on the above problems, the inventors propose a reverse parsing and key attribute recovery scheme for image security coding techniques. It helps to understand and evaluate the effect of a security coding technique, reveals possible attack paths and weaknesses, verifies the key attribute protection performance of the technique, and thus helps formulate corresponding countermeasures.
Disclosure of Invention
Aiming at the weaknesses of existing image security coding techniques, the present application provides an effective reverse analysis scheme for encodings that recovers key attribute information of the original image data from an encoded data set, reveals the attack paths and weaknesses of image security coding techniques, and provides policy guidance for their security.
The application discloses a method for reverse analysis and key attribute recovery of security-coded images, comprising image preprocessing, inference of mixing coefficients, classifier training, generation of latent codes, and reverse parsing with a characterization disentanglement network. The specific steps are as follows:
Step 1: image preprocessing. Receive the securely encoded image data set and recover the original undisturbed mixed image data set;
the input data is a INSTAHIDE security encoded image dataset. First, the image data is fed into a preprocessing module that restores the original undisturbed hybrid image by removing the random mask over the image. The preprocessing module generates a plurality of hybrid image datasets using a process of generating a challenge network, inverse random symbol flipping.
Mixup creates a new training sample by linear interpolation of two or more different samples. For two input samples $(x_1, y_1)$ and $(x_2, y_2)$, where $x_1$ and $x_2$ are input features and $y_1$ and $y_2$ are the corresponding labels, Mixup generates a new sample $(\tilde{x}, \tilde{y}) = (\lambda x_1 + (1-\lambda) x_2,\; \lambda y_1 + (1-\lambda) y_2)$, where $\lambda$ is a random variable obeying a Beta distribution. The Mixup security encoding can here be expressed in matrix multiplication form, i.e., $\tilde{X} = \Lambda X$, where $\Lambda$ is a matrix each element of which represents the mixing weight of the corresponding sample. The InstaHide security encoding then generates a new occluded sample $\hat{X} = S \circ \tilde{X}$, where $\circ$ denotes element-wise multiplication. Here $S$ is a matrix used to indicate whether the pixel at each position is occluded: a discrete random matrix whose elements are sampled from $\{+1, -1\}$ according to a discrete uniform distribution.
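For concreteness, the following Python sketch (not part of the patent text; the function names, the two-image mix, and the toy shapes are illustrative assumptions) shows a Mixup interpolation followed by an InstaHide-style random sign mask:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_encode(x1, y1, x2, y2, alpha=0.5):
    """Mixup: interpolate two samples and their one-hot labels."""
    lam = rng.beta(alpha, alpha)            # mixing coefficient lambda ~ Beta
    x_mix = lam * x1 + (1.0 - lam) * x2     # blended image
    y_mix = lam * y1 + (1.0 - lam) * y2     # blended (soft) label
    return x_mix, y_mix, lam

def instahide_encode(x_mix):
    """InstaHide-style step: apply a random +/-1 sign mask S element-wise."""
    s = rng.choice([-1.0, 1.0], size=x_mix.shape)
    return s * x_mix, s

# toy usage: two 8x8 "images" with 10-class one-hot labels
x1, x2 = rng.random((8, 8)), rng.random((8, 8))
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix, lam = mixup_encode(x1, y1, x2, y2)
x_enc, mask = instahide_encode(x_mix)
print(f"lambda = {lam:.3f}, non-zero label indexes: {np.nonzero(y_mix)[0]}")
```

Note that the non-zero entries of the mixed label are exactly the quantities read off in step 2.1 below.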
The precondition of the present invention is that the decoding algorithm has access to $n$ encoded samples obtained by the secure encodings Mixup or InstaHide, expressed as $\tilde{X} = \Lambda X$ or $\hat{X} = S \circ (\Lambda X)$ respectively. The reverse parsing of InstaHide can thus be divided into inverting the two processes $S$ and $\Lambda$, and the preprocessing step aims to reverse the masking process $S$, i.e., to find a $\tilde{X}'$ such that $\tilde{X}' \approx \Lambda X$. The invention uses a self-encoder to remove the random sign flipping by constructing image pairs before and after random sign flipping; the specific steps comprise:
Step 1.1: constructing a self-encoder model;
The encoder maps the synthesized sample set $\hat{X}$ to a low-dimensional latent representation $z$. The encoder is a model made up of a series of neural network layers that map input samples into representations in latent space. The decoder restores, from the latent representation $z$, samples as close as possible to the original mixed samples $\tilde{X}$; its structure is the opposite of the encoder's.
Step 1.2: constructing training data pairs;
Using the set of mixed samples $\tilde{X}$ and the set of synthesized samples $\hat{X}$ after random sign flipping, training data pairs are constructed. Each training data pair includes one synthesized sample and the corresponding mixed sample. The mixed sample $\tilde{x}$ is known, and the synthesized sample $\hat{x}$ is obtained by performing a random sign-flip operation on the mixed sample.
Step 1.3: defining a loss function;
A loss function is used to measure the difference between the decoder output $D(E(\hat{X}))$ and the original mixed samples $\tilde{X}$. The loss function may be defined as $L = \| D(E(\hat{X})) - \tilde{X} \|^2$, where $E$ is the encoder and $D$ is the decoder.
Step 1.4: training a self-encoder model;
A stochastic gradient descent optimization algorithm is used to train the self-encoder model by minimizing the above-described loss function. In each training step, samples are input into the self-encoder model, gradients are calculated by the back-propagation algorithm, and model parameters are updated.
Step 1.5: reconstructing a sample;
When the training of the self-encoder model is completed, a new synthesized sample set $\hat{X}_{\text{new}}$ is input to the encoder and passed through the decoder to obtain reconstructed samples with the random sign-flip noise effects removed, i.e., $\tilde{X}' = D(E(\hat{X}_{\text{new}}))$. In this way, the self-encoder learns the inverse mapping of the random sign-flip noise, thereby cancelling the noise to some extent and recovering the original characteristics of the mixed samples.
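As an illustrative sketch of steps 1.1 to 1.5 (the layer sizes, activations, and training details are assumptions not specified in the text), a minimal convolutional self-encoder trained to map sign-flipped images back to their mixed originals could look like this:

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Learns the inverse mapping of the random sign-flip noise (step 1.5)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                # image -> low-dim latent z
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                # latent z -> reconstruction
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x_hat):
        return self.decoder(self.encoder(x_hat))

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy training pair (step 1.2): mixed images and their sign-flipped versions
x_mix = torch.rand(8, 3, 32, 32) * 2 - 1
flip = torch.where(torch.rand_like(x_mix) < 0.5,
                   -torch.ones_like(x_mix), torch.ones_like(x_mix))
x_hat = flip * x_mix

opt.zero_grad()
loss = nn.functional.mse_loss(model(x_hat), x_mix)   # L = ||D(E(x_hat)) - x_mix||^2
loss.backward()                                      # one update of step 1.4
opt.step()
print(loss.item())
```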
Step 2: estimating the mixing coefficients. Given the input mixed image data set, estimate and calculate the mixing coefficient $\lambda$ representing the contribution of each image in the mixed image within each encoded sample label. By calculating $\lambda$, the part of the latent codes required for the generation step below can be determined. The method specifically comprises the following steps:
2.1: estimating a private sample mixing coefficient;
For a given mixed label $\tilde{y}$, the value of each element $\tilde{y}_i$ can be observed. Suppose there are $k$ non-zero elements with corresponding indexes $i_1, \dots, i_k$. In this case, it can be inferred that the mixing coefficients are $\lambda_j = \tilde{y}_{i_j}$, where $j = 1, \dots, k$.
2.2: Estimating a common sample mixing coefficient;
Judge whether the sample is encoded across data sets according to the prior knowledge given by the sample encoding method: if the encoding process involves both a private data set and a public data set, it is cross-data-set sample encoding, and the mixing coefficients of the public samples must also be estimated. Assume the sum of the private sample coefficients obtained in step 2.1 is $\lambda_p$; the sum of the mixing coefficients of the public samples is then $1 - \lambda_p$.
Each mixed sample contains $m$ public sample parts, for which the mixing coefficients are set to $(1 - \lambda_p)/m$, i.e., the public sample coefficients are distributed evenly.
For the case where some unlabeled public samples are mixed in, if only the mixed labels are accessible, the mixing coefficients of the private and public samples can still be estimated by this simple method: with private and public coefficient sums $\lambda_p$ and $1 - \lambda_p$ and $m$ public samples per mixed sample, the mixing coefficient of each public sample is set to $(1 - \lambda_p)/m$. The justification is that, assuming uniform mixing among the public samples, every public sample has the same mixing coefficient, so the remaining $1 - \lambda_p$ is divided equally among the $m$ public samples.
Note that this simple estimation method assumes that the mixing among public samples is uniform and that no other complex transformations occur during mixing. In practice, more sophisticated methods may be required to estimate the mixing coefficients, depending on the particular problem and the nature of the data set.
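A small worked sketch of the estimation in steps 2.1 and 2.2 (the label vector and the number of public samples are invented for illustration):

```python
import numpy as np

def estimate_coefficients(y_mixed, m_public):
    """Step 2.1: read private coefficients off the non-zero label entries.
    Step 2.2: split the remaining mass evenly across m public samples."""
    idx = np.nonzero(y_mixed)[0]
    private = {int(i): float(y_mixed[i]) for i in idx}   # lambda_j = y[i_j]
    lam_p = sum(private.values())                        # sum of private coeffs
    public_each = (1.0 - lam_p) / m_public               # even split of the rest
    return private, public_each

# toy mixed label over 10 classes: two private images contributed 0.4 and 0.3
y = np.zeros(10)
y[3], y[7] = 0.4, 0.3
private, pub = estimate_coefficients(y, m_public=2)
print(private, pub)   # {3: 0.4, 7: 0.3} 0.15
```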
Step 3: classifier training. The mixed images and their labels are input together into a classifier for training. The classifier is a deep neural network that automatically extracts features from an image and classifies the image according to those features. The specific steps are as follows:
Step 3.1: preparing the mixed image data and the corresponding label data;
Step 3.2: constructing a deep neural network classifier model, built on the existing VGG-16 network;
Step 3.3: using the mixed images as input data and the labels as target outputs to form training sample pairs $(\tilde{x}, \tilde{y})$;
Step 3.4: optimizing the model parameters with the training sample pairs $(\tilde{x}, \tilde{y})$: $\min_\theta L(f_\theta(\tilde{x}), \tilde{y})$, where $\theta$ are the model parameters, $f_\theta(\tilde{x})$ is the model output, and $L$ is the loss function;
Step 3.5: repeating step 3.4 until the loss falls below a set threshold or the training round reaches the maximum number of training rounds.
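For illustration, a minimal soft-label training loop of the kind described in steps 3.1 to 3.5 might look like the following sketch; the VGG-16 backbone follows the text above, while the 10-class setup, the toy batch, and the optimizer settings are assumptions:

```python
import torch
from torchvision.models import vgg16

model = vgg16(num_classes=10)                        # classifier f_theta (step 3.2)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def soft_label_loss(logits, y_soft):
    """Cross-entropy against a mixed (soft) label distribution."""
    return -(y_soft * torch.log_softmax(logits, dim=1)).sum(dim=1).mean()

x = torch.rand(4, 3, 224, 224)                       # toy batch of mixed images
y = torch.zeros(4, 10)
y[:, 3], y[:, 7] = 0.6, 0.4                          # mixed soft labels (step 3.3)

for step in range(2):                                # repeat until converged (3.5)
    opt.zero_grad()
    loss = soft_label_loss(model(x), y)              # step 3.4
    loss.backward()
    opt.step()
    print(step, loss.item())
```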
Step 4: generating latent codes. The latent code comprises a supervised part and an unsupervised part; the supervised part is determined by the mixing coefficients $\lambda$, and the unsupervised part is obtained by splicing prior knowledge about the secure coding scheme with random noise:
Step 4.1: defining a random noise latent variable $z$, sampled from a high-dimensional noise distribution (such as a normal or uniform distribution);
Step 4.2: defining the latent code $c$, which comprises the supervised part and the unsupervised part and controls the attributes of the generated samples;
For the case within a single encoded data set, the latent code $c$ comprises, besides the random part $z$, $k$ discrete variables that control which private images the current image is composed of. Each discrete variable has a value range of $\{1, \dots, N\}$, where $N$ is the number of private images;
If the encoded data also contains other public image information, $m$ further discrete variables can be used to control which public images the current image contains. Each of these discrete variables has a value range of $\{1, \dots, C\}$, where $C$ is the number of categories of public images;
The supervised part of the latent code is used to control the share of each private image in the mixed image. This part comprises $k$ continuous variables representing the mixing intensities;
The continuous-variable part consists of the parameters $\lambda_1, \dots, \lambda_k$; each parameter varies continuously within a certain range, the specific range being determined by the requirements of the problem;
Step 4.3: combining the random noise latent variable $z$ and the latent code $c$ to form the generator input $[z, c]$, where $[\cdot\,, \cdot]$ denotes vector concatenation.
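A sketch of the latent-code assembly in steps 4.1 to 4.3; the dimensions and the use of raw indices for the discrete variables are illustrative assumptions (a full InfoGAN implementation would typically one-hot encode the discrete variables):

```python
import torch

def build_generator_input(k=2, m=3, n_private=100, n_public_classes=5, z_dim=64):
    """Assemble [z, c]: noise z (4.1) plus the latent code c (4.2) made of
    discrete private/public selectors and continuous mixing intensities."""
    z = torch.randn(z_dim)                                # random noise
    private_ids = torch.randint(0, n_private, (k,))       # k discrete variables
    public_ids = torch.randint(0, n_public_classes, (m,)) # m discrete variables
    lam = torch.rand(k)
    lam = lam / lam.sum()                                 # continuous lambdas, sum 1
    c = torch.cat([private_ids.float(), public_ids.float(), lam])
    return torch.cat([z, c])                              # generator input (4.3)

print(build_generator_input().shape)   # 64 + 2 + 3 + 2 = torch.Size([71])
```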
Step 5: characterizing disentanglement and reverse-parsing the images. The latent codes and the classifier are input into the characterization disentanglement network for reverse parsing. The characterization disentanglement network is a special network structure that can separate highly entangled feature representations and recover the original features from them. The specific steps are as follows:
Step 5.1: the disentanglement network adds a classifier $C$ on top of the original InfoGAN. The classifier is trained with InstaHide-encoded images; during this training only the weights of the convolutional layers are updated, while the fully connected layers of the classifier are kept fixed. The pre-trained convolutional layers serve as a feature extractor and are shared with the discriminator of the InfoGAN network. During training of the disentanglement network, the parameters of the convolutional layers shared by the discriminator and the generator are kept unchanged, and the remaining parameters of the two networks are updated separately. Both the discriminator and the classifier of the present invention use the existing VGG-16 network architecture.
Step 5.2: the loss function of the disentanglement network comprises three parts: the generator loss function $L_G$, the discriminator loss function $L_D$, and the classifier loss function $L_C$.
The goal of the generator is to produce samples that are as realistic as possible and hard for the discriminator to distinguish; the generator loss is implemented by minimizing the negative log expectation of the discriminator's probability. The goal of the discriminator is to accurately distinguish real samples from generated samples; the discriminator loss consists of the negative log expectations of its judgments on real and generated samples. The classifier loss is used to classify generated and real samples, ensuring that the generated samples carry the correct class attributes;
Step 5.3: the disentanglement network introduces a mutual information term $I(c; G(z, c))$ to strengthen the correlation between the generated samples and the latent codes. This mutual information term controls the correlation between the generated samples and the latent codes within the overall optimization objective;
Step 5.4: the overall loss function $L$ of the disentanglement network integrates the generator loss, the discriminator loss, the mutual information term, and the classifier loss; the network is trained by minimizing the generator loss while maximizing the discriminator loss;
Step 5.5: after the disentanglement network training converges, samples with specific properties or characteristics can be generated by adjusting the random and deterministic information input to the generator. By controlling the values of the latent code, a private sample with specific attributes can be generated; by adjusting the values of the continuous variables, a reconstructed image closely resembling the private image can be generated.
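The following sketch illustrates one plausible form of the training step described in steps 5.1 to 5.4. It uses the standard InfoGAN variational surrogate for the mutual information term and simple placeholder networks G, D, Q, C on flattened toy data; none of these specifics are prescribed by the text above.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

# placeholder networks on flat 64-d "images"; the real ones would be CNNs (VGG-16)
z_dim, c_dim, x_dim, y_dim = 16, 4, 64, 10
G = nn.Sequential(nn.Linear(z_dim + c_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, 1))
Q = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, c_dim))
C = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, y_dim))

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(list(G.parameters()) + list(Q.parameters()), lr=2e-4)

def disentangle_step(x_real, z, c, y_soft):
    """One step: update D on L_D, then G and Q on L_G + MI surrogate + L_C term."""
    x_fake = G(torch.cat([z, c], dim=1))
    ones = torch.ones(x_real.size(0), 1)
    zeros = torch.zeros(x_real.size(0), 1)

    opt_d.zero_grad()                                 # discriminator loss L_D
    loss_d = bce(D(x_real), ones) + bce(D(x_fake.detach()), zeros)
    loss_d.backward()
    opt_d.step()

    opt_g.zero_grad()
    adv = bce(D(x_fake), ones)                        # generator loss L_G
    mi = nn.functional.mse_loss(Q(x_fake), c)         # surrogate for I(c; G(z,c))
    cls = nn.functional.mse_loss(torch.softmax(C(x_fake), 1), y_soft)  # L_C term
    (adv + mi + cls).backward()
    opt_g.step()
    return loss_d.item()

b = 8
print(disentangle_step(torch.randn(b, x_dim), torch.randn(b, z_dim),
                       torch.rand(b, c_dim), torch.softmax(torch.randn(b, y_dim), 1)))
```

Here Q is the auxiliary head that tries to recover the latent code from generated samples; maximizing its accuracy is the usual lower-bound surrogate for the mutual information term.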
The application has at least the following technical effects or advantages:
1. The implementation method for reverse analysis and key attribute recovery of image security coding has broad application scenarios and is particularly suitable for the security assessment of sample coding schemes and the vulnerability detection of obfuscation techniques.
2. The reverse analysis scheme adopted by the invention restores securely encoded image data, helps evaluate the risks to key data attributes so that corresponding protection measures can be adopted, and has important application prospects in fields such as key data attribute protection and data security analysis.
Drawings
FIG. 1 is a diagram of image security encoding across data sets in an embodiment of the present invention.
Fig. 2 is a schematic diagram of a reverse parsing flow for security coding in an embodiment of the invention.
Fig. 3 is a schematic diagram illustrating the principle of disentanglement in an embodiment of the present invention.
FIG. 4 is a schematic diagram of a potential code configuration in an embodiment of the present invention.
Fig. 5 is a schematic diagram showing the structure of the disentanglement network according to an embodiment of the present invention.
Detailed Description
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
Examples: this embodiment provides a method for reverse analysis and key attribute recovery of security-coded images. FIG. 1 illustrates the cross-data-set image security encoding flow contemplated by the present invention, where $\lambda$ represents the mixing coefficients and $\sigma$ represents the random sign flip, i.e., the most significant bit of each pixel of the blended image is flipped at random (0 flipped to 1, 1 flipped to 0). The whole synthesis process is expressed as $\hat{X} = \sigma \circ M(X)$, and the mixing process is $M(X) = \Lambda X$. The preprocessing step exploits a self-encoder model to remove the random sign-flip noise from the encoded images.
As shown in FIGS. 2 and 3, exploiting the fact that the mixed images are highly entangled in the model's intermediate representations, a characterization disentanglement network is designed in combination with known prior knowledge, and the details and contents of the private images can be approximately recovered. The following takes the secure coding scheme InstaHide as an example to describe how the present invention recovers the key attributes of encoded images.
Step 1: image preprocessing
1.1, Construct the self-encoder model. The encoder part adopts a convolutional neural network structure and is responsible for mapping a group of image sequences $\hat{X}$ after random sign-flip processing to a low-dimensional latent feature space. The decoder part adopts a design mirror-symmetrical to the encoder structure, aiming to reconstruct from the latent features a mixed image $\tilde{X}'$ as close as possible to the original clean, noise-free one.
1.2, Construct training data pairs. The cropped ImageNet public data set and the selected private data set are used to make the training data set. Here the mixed samples $\tilde{X}$ are a series of true mixed images, and the synthesized samples $\hat{X}$ are generated by a random sign-flip operation that simulates the perturbation of the mixed samples. Each pair of training data includes a noise-added synthesized image and its corresponding noise-free original image.
1.3, Define the loss function, quantifying the difference between the denoised reconstructed image $D(E(\hat{X}))$ output by the decoder and the actual noiseless blended image $M(X)$. The mean square error loss function is selected here, computed as the average pixel-level squared difference over all images in the training batch: $L = \frac{1}{B}\sum_{b=1}^{B} \| D(E(\hat{x}_b)) - M(x_b) \|^2$, where $E$ and $D$ represent the encoding and decoding processes, respectively.
1.4, Train the self-encoder model by stochastic gradient descent with the Adam optimizer; through continuous iteration, the model parameters are adjusted so that the loss value keeps decreasing. In each iteration, the model receives a batch of randomly sign-flipped images as input, computes the loss by forward propagation, computes the gradients by backward propagation, and updates the model weights.
1.5, Reconstruct samples. After the self-encoder model training converges, it can be applied to a new image set with random sign-flip noise. The latent representation is extracted by the encoder and then reconstructed via the decoder to obtain an image sequence with the noise effects removed. In this way, the self-encoder learns the inverse mapping of the random sign-flip noise, thereby cancelling the noise to some extent and recovering the original characteristics of the mixed samples.
Step2: inferring mixing coefficients
2.1, In a blended image, assume a mixed label vector $\tilde{y}$, where $\tilde{y}_i$ denotes the label entries of the image. Assume the image is known to originate from $k$ private images whose pixel values make non-zero contributions to the blended image. For example, if the indexes of the salient entries in the label are $i_1, \dots, i_k$, then the mixing coefficients $\lambda_1, \dots, \lambda_k$ corresponding to these entries can be observed directly as the label values in the mixed label, i.e., $\lambda_j = \tilde{y}_{i_j}$ represents the extent of influence of the $j$-th private image in the blended image.
2.2, For the portions of the blended image contributed by public sample images: since the specific contributions of these public samples are not directly known, assuming they are uniformly distributed throughout the blended image, the total blending coefficient of all public samples, $1 - \lambda_p$, can be distributed evenly among the $m$ public samples participating in the mix, i.e., the mixing coefficient of each public sample is $(1 - \lambda_p)/m$. Here $\lambda_p$ represents the sum of the blending coefficients of all private sample images. For example, suppose the main content of a mixed image comes from two private photos $A$ and $B$ with mixing coefficients $\lambda_A$ and $\lambda_B$, and in addition $m = 3$ public texture pictures C, D and E are uniformly mixed together to form the background. The blending coefficient of each public texture picture is then set to $(1 - \lambda_A - \lambda_B)/3$, meaning that under the current simplified model each public sample can be considered to contribute this share directly to the final blended image.
It should be noted that the simple estimation method of this embodiment assumes that the mixing among public samples is uniform and that no other complex transformations occur during mixing. In practice, more sophisticated methods may be required to estimate the mixing coefficients, depending on the particular problem and the nature of the data set.
Step 3: classifier training
3.1, In the image classification task, each blended image may contain private sample portions from different categories. For example, an image may contain both a frog and a deer, with corresponding label $\tilde{y} = (\lambda_1, \lambda_2)$, where $\lambda_1$ represents the proportion of the frog in the image and $\lambda_2$ represents the proportion of the deer.
3.2, Construct a multi-label deep neural network classifier, in which convolutional layers capture local features, fully connected layers integrate global features, and the output layer uses a softmax function to generate the probability distribution of each category in the image.
3.3, Construct training sample pairs. For example, an image mixing a frog and a deer is used as the input $\tilde{x}$, and the corresponding label $y$ is the pair of mixing proportions $(\lambda_1, \lambda_2)$.
3.4, Optimize the parameters $\theta$ with algorithms such as gradient descent according to the loss function $L$. The goal is to make the probability distribution predicted by the model as close as possible to the true label distribution, i.e., $f_\theta(\tilde{x}_i) \approx \tilde{y}_i$, where $f_\theta(\tilde{x}_i)$ is the probability vector of the per-category shares predicted by the model for the $i$-th sample.
3.5, Repeat until the model's loss on the training set falls below a set threshold or the preset maximum number of training iterations is reached.
After training, the model can automatically identify and quantify the proportion in which each category is present in a mixed image.
Step 4: latent code generation
For example, suppose there is a mixed image containing $k$ different private images and $m$ public images, the private image data set size is $N$, and the number of categories of public images is $C$.
4.1, Sample from a Gaussian white-noise distribution or another specified distribution to obtain the random noise latent variable $z$, whose dimension depends on the structural requirements of the generator. Define the latent code $c$, divided into two main parts: a discrete part and a continuous part. Discrete part: for the private image portion there are $k$ discrete variables, each designating a certain picture in the private image library, with value range $\{1, \dots, N\}$; for the public image portion there are $m$ discrete variables indicating which type of public image is selected for mixing, with value range $\{1, \dots, C\}$.
4.2, For the selected private images, set a group of continuous variables $\lambda_1, \dots, \lambda_4$ as shown in FIG. 4, four in total, corresponding to the proportion of each private image in the final synthesized image. For instance, $\lambda_2 = 0$ indicates that the second image does not participate in the mixing, while $\lambda_1$, $\lambda_4$ and $\lambda_3$ give the shares occupied by the first, fourth and third images respectively. The random noise latent variable $z$ and the latent code $c$ (its discrete and continuous parts spliced together) form the generator input $[z, c]$.
4.3, Input the integrated latent vector $[z, c]$ into the generator network. According to the coding information and in combination with the noise variable, the generator produces a new image mixing the specified private and public image components, in which the share of each part conforms to the values of the given continuous variables $\lambda$.
Step 5: reverse parsing with the characterization disentanglement network
When training the disentanglement network, as shown in FIG. 5, the weight parameters of the generator $G$, the discriminator $D$, and the classifier $C$ are first initialized. Next, for each training batch, codes are randomly drawn from the latent space and these combinations are fed into the generator $G$, producing a new set of image samples $G(z, c)$.
Subsequently, the discriminator loss on the extracted real data samples, $L_D^{\text{real}}$, is calculated; this loss typically employs cross entropy or another suitable discrimination loss function. Next, the discriminator's loss on the samples produced by the generator, $L_D^{\text{fake}}$, is likewise calculated. The two losses are combined into the total discriminator loss $L_D$, and the discriminator parameters are then updated to maximize this total loss.
On this basis, the classifier parameters are updated; during this process a portion of the parameters is kept fixed, and the parameters shared with the discriminator are adjusted accordingly. Meanwhile, to optimize the generator's performance, the generator loss is computed; it comprises the discriminator's loss on the generated outputs as well as a term reflecting the mutual information between the latent codes and the generated samples. The total generator loss $L_G$ includes both components, and the generator parameters are updated accordingly to minimize this total loss.
Throughout the training process, the quality of the generated images is periodically assessed with metrics such as the Structural Similarity Index (SSIM) and the Learned Perceptual Image Patch Similarity (LPIPS). Once model performance reaches a preset convergence criterion or the number of training rounds reaches its upper limit, the training process is terminated. After training is completed, private samples can be recovered by combining the trained generator with specific latent codes.
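For illustration, the two quality metrics mentioned above could be computed with the scikit-image and lpips packages; the choice of these libraries and the toy images are assumptions, not part of the embodiment:

```python
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity

# toy reconstructed image vs. private reference image, values in [0, 1]
recon = np.random.rand(64, 64, 3).astype(np.float32)
ref = np.random.rand(64, 64, 3).astype(np.float32)

# SSIM on HxWxC float images
ssim = structural_similarity(ref, recon, channel_axis=2, data_range=1.0)

# LPIPS expects NCHW tensors scaled to [-1, 1]
loss_fn = lpips.LPIPS(net='alex')
to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1).unsqueeze(0) * 2 - 1
dist = loss_fn(to_t(ref), to_t(recon)).item()

print(f"SSIM = {ssim:.3f}, LPIPS = {dist:.3f}")
```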
The foregoing is merely a preferred embodiment of the invention. It should be noted that those skilled in the art could make modifications without departing from the principles of the invention, and such modifications are also considered to fall within the scope of the invention.

Claims (8)

1. A method for reverse analysis and key attribute recovery of security-coded images, characterized by comprising the following steps:
image preprocessing: receiving a securely encoded image data set and recovering the original undisturbed mixed image data set;
mixing coefficient estimation: from the input mixed image data set, estimating and calculating the mixing coefficient λ representing the contribution of each image in the mixed image within each encoded sample label;
classifier training: inputting the mixed images and their labels into a classifier for training, automatically extracting features from the images, and classifying the images according to those features;
latent code generation: generating latent codes consisting of a supervised part determined by the mixing coefficient λ and an unsupervised part obtained by splicing prior knowledge of the secure coding scheme with random noise;
characterization disentanglement and image reverse parsing: inputting the latent codes and the classifier into a characterization disentanglement network for reverse parsing, separating the highly entangled feature representations, and recovering the original features from them.
2. The method for reverse analysis and key attribute recovery of security-coded images according to claim 1, characterized in that: in the image preprocessing step, if the encoded images have undergone random sign flipping, a self-encoder is first used to reverse and remove the random sign flipping, after which the mixed image data set is obtained; the steps of constructing and training the self-encoder comprise:
1.1 construction of self-encoder model
the encoder maps the synthesized sample set $\hat{X}$ to a low-dimensional latent representation $z$; the encoder is a model made up of a series of neural network layers mapping input samples to representations in latent space, and the decoder restores near-original mixed samples $\tilde{X}$ from the latent representation $z$, its structure being opposite to that of the encoder;
1.2 constructing training data pairs
using the set of mixed samples $\tilde{X}$ and the set of synthesized samples $\hat{X}$ after random sign flipping to construct training data pairs, each training data pair comprising one synthesized sample and the corresponding mixed sample; the mixed sample $\tilde{x}$ is known, and the synthesized sample $\hat{x}$ is obtained by performing a random sign-flip operation on the mixed sample;
1.3 defining a loss function
using a loss function to measure the difference between the decoder output $D(E(\hat{X}))$ and the original mixed samples $\tilde{X}$, the loss function being defined as: $L = \| D(E(\hat{X})) - \tilde{X} \|^2$, where $E$ is the encoder and $D$ is the decoder;
1.4 training the self-encoder model
training the self-encoder model by minimizing the above loss function using a stochastic gradient descent optimization algorithm; in each training step, inputting samples into the self-encoder model, calculating gradients by the back-propagation algorithm, and updating the model parameters;
1.5, reconstruction of samples
when the training of the self-encoder model is completed, inputting a new synthesized sample set $\hat{X}_{\text{new}}$ to the encoder and passing it through the decoder to obtain reconstructed samples with the random sign-flip noise effects removed, i.e., $\tilde{X}' = D(E(\hat{X}_{\text{new}}))$.
3. The method for reverse analysis and key attribute recovery of security-coded images according to claim 1, characterized in that the step of estimating the mixing coefficients comprises:
2.1 private sample mixing coefficient estimation
for a given mixed label $\tilde{y}$, observing the value of each element $\tilde{y}_i$; supposing there are $k$ non-zero elements with corresponding indexes $i_1, \dots, i_k$, then inferring the mixing coefficients as $\lambda_j = \tilde{y}_{i_j}$, where $j = 1, \dots, k$;
2.2 Public sample mixing coefficient estimation
judging whether the sample is encoded across data sets according to the prior knowledge given by the sample encoding method: if the encoding process involves both a private data set and a public data set, it is cross-data-set sample encoding and the mixing coefficients of the public samples must also be estimated; assuming the sum of the private sample coefficients obtained in step 2.1 is $\lambda_p$, the sum of the mixing coefficients of the public samples is $1 - \lambda_p$;
each mixed sample contains $m$ public sample parts, for which the mixing coefficients are set to $(1 - \lambda_p)/m$, i.e., the public sample coefficients are distributed evenly.
4. The method for reverse analysis and key attribute recovery of security-coded images according to claim 1, characterized in that the classifier training step comprises the following steps:
3.1, preparing mixed image data and corresponding label data;
3.2, constructing a deep neural network classifier model;
3.3, using the mixed images as input data and the labels as target outputs to form training sample pairs $(\tilde{x}, \tilde{y})$;
3.4, optimizing the model parameters through the training sample pairs $(\tilde{x}, \tilde{y})$: $\min_\theta L(f_\theta(\tilde{x}), \tilde{y})$, where $\theta$ are the model parameters, $f_\theta(\tilde{x})$ is the model output, and $L$ is the loss function;
3.5, repeating step 3.4 until the loss falls below a set threshold or the training round reaches the maximum number of training rounds.
5. The method for reverse analysis and key attribute recovery of security-coded images according to claim 1, characterized in that the latent code generation step comprises:
4.1, defining a random noise latent variable $z$, sampled from a high-dimensional noise distribution;
4.2, defining the latent code $c$, comprising a supervised part and an unsupervised part and controlling the attributes of the generated samples; the supervised part is determined by the mixing coefficients $\lambda$, and the unsupervised part is obtained by splicing prior knowledge of the secure coding scheme with random noise;
4.3, combining the random noise latent variable $z$ and the latent code $c$ to form the generator input $[z, c]$, where $[\cdot\,, \cdot]$ denotes vector concatenation.
6. The method for reverse analysis and key attribute recovery of security-coded images according to claim 5, characterized in that: in said step 4.2, the latent code $c$ comprises, besides the random part $z$, $k$ discrete variables for controlling which private images compose the current image, wherein each discrete variable has a value range of $\{1, \dots, N\}$, where $N$ is the number of private images.
7. The method for reverse analysis and key attribute recovery of security-coded images according to claim 5, characterized in that: in step 4.2, if the encoded data also contains other public image information, $m$ discrete variables are used to control which public images the current image contains, each discrete variable having a value range of $\{1, \dots, C\}$, where $C$ is the number of categories of public images.
8. The method for reverse analysis and key attribute recovery of security-coded images according to claim 1, characterized in that the reverse parsing step of the characterization disentanglement network comprises:
5.1, the disentanglement network adds a classifier $C$ on top of the original InfoGAN, trained with InstaHide-encoded images; only the weights of the convolutional layers are updated during this training, while the fully connected layers of the classifier are kept fixed; the pre-trained convolutional layers serve as a feature extractor and are shared with the discriminator in the InfoGAN network; during training of the disentanglement network, the parameters of the convolutional layers shared by the discriminator and the generator are kept unchanged, and the remaining parameters of the two networks are updated separately;
5.2, the loss function of the disentanglement network comprises three parts:
a generator loss function $L_G$, implemented by minimizing the negative log expectation of the discriminator's probability;
a discriminator loss function $L_D$, consisting of the negative log expectations of the discriminator's judgments on real and generated samples;
a classifier loss function $L_C$, used to classify generated and real samples to ensure that the generated samples have the correct class attributes;
5.3, the disentanglement network introduces a mutual information term $I(c; G(z, c))$ to strengthen the correlation between the generated samples and the latent codes, the mutual information term controlling that correlation within the overall optimization objective;
5.4, the overall loss function $L$ of the disentanglement network integrates the generator loss, the discriminator loss, the mutual information term and the classifier loss, training the network by minimizing the generator loss and maximizing the discriminator loss;
5.5, after the disentanglement network training converges, generating samples with specific properties or characteristics by adjusting the random and deterministic information input to the generator: private samples with specific attributes are generated by controlling the values of the latent code, and a reconstructed image closely resembling the private image is generated by adjusting the values of the continuous variables.
CN202410806267.3A (priority and filing date 2024-06-21): Method for reverse analysis and key attribute recovery of security coding image. Status: Active. Granted as CN118397115B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410806267.3A 2024-06-21 2024-06-21 Method for reverse analysis and key attribute recovery of security coding image (granted as CN118397115B)


Publications (2)

Publication Number Publication Date
CN118397115A 2024-07-26
CN118397115B 2024-10-01

Family

ID=91985079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410806267.3A Method for reverse analysis and key attribute recovery of security coding image (Active, granted as CN118397115B) 2024-06-21 2024-06-21

Country Status (1)

Country Link
CN CN118397115B (granted)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8879813B1 (en) * 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images
CN116994040A (en) * 2023-07-12 2023-11-03 华能(浙江)能源开发有限公司清洁能源分公司 Image recognition-based deep sea wind power generation PQDs (pulse-height distribution system) classification method and system
CN117915015A (en) * 2024-02-01 2024-04-19 深圳市通程软件开发有限公司 Invisible image anti-counterfeiting technical system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JORGE SANCHEZ et al.: "Image classification with the Fisher vector: theory and practice", International Journal of Computer Vision, vol. 105, 12 June 2013 (2013-06-12), pages 222-245, XP035362231, DOI: 10.1007/s11263-013-0636-x *
ZHANG, Min: "Design and Implementation of a Home Video Behavior Recognition System Based on a Visual Privacy Security Mechanism and a Temporal Adaptive Network" (基于视觉隐私安全机制与时序自适应网络的居家视频行为识别系统设计与实现), China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2, 15 February 2023 (2023-02-15), pages 038-2599 *

Also Published As

Publication number Publication date
CN118397115B (en) 2024-10-01

Similar Documents

Publication Publication Date Title
Kos et al. Adversarial examples for generative models
Amerini et al. Exploiting prediction error inconsistencies through LSTM-based classifiers to detect deepfake videos
CN111523668B (en) Training method and device of data generation system based on differential privacy
CN113658051A (en) Image defogging method and system based on cyclic generation countermeasure network
Zhang et al. Hierarchical density-aware dehazing network
Peng et al. A robust coverless steganography based on generative adversarial networks and gradient descent approximation
CN110071798B (en) Equivalent key obtaining method and device and computer readable storage medium
Fan et al. Deep adversarial canonical correlation analysis
CN114330736A (en) Latent variable generative model with noise contrast prior
Wei et al. Generative steganographic flow
Wani et al. Deep learning based image steganography: A review
CN116978105A (en) AI face-changing image anomaly detection method
CN118196231A (en) Lifelong learning draft method based on concept segmentation
CN114494387A (en) Data set network generation model and fog map generation method
Abdollahi et al. Image steganography based on smooth cycle-consistent adversarial learning
CN118397115B (en) Method for reverse analysis and key attribute recovery of security coding image
CN116647391A (en) Network intrusion detection method and system based on parallel self-encoder and weight discarding
CN116091891A (en) Image recognition method and system
Shankar et al. Moderate embed cross validated and feature reduced Steganalysis using principal component analysis in spatial and transform domain with Support Vector Machine and Support Vector Machine-Particle Swarm Optimization
CN116074065A (en) Longitudinal federal learning attack defense method and device based on mutual information
CN115908094A (en) Self-supervision lossless zero-watermark algorithm based on feature comparison learning
Rajpal et al. Fast digital watermarking of uncompressed colored images using bidirectional extreme learning machine
Luo et al. Reversible adversarial steganography for security enhancement
García-González et al. Deep autoencoder architectures for foreground object detection in video sequences based on probabilistic mixture models
Wu et al. General generative model‐based image compression method using an optimisation encoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant