CN112418332A - Image processing method and device and image generation method and device
- Publication number
- CN112418332A (application number CN202011349027.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- disturbance
- feature map
- feature
- source image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/214 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24 — Pattern recognition; Analysing; Classification techniques
- G06F18/253 — Pattern recognition; Analysing; Fusion techniques; Fusion techniques of extracted features
- G06N3/045 — Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
- G06N3/08 — Neural networks; Learning methods
Abstract
The present disclosure provides an image processing method and apparatus, and an image generation method and apparatus. The image processing method includes: acquiring a source image and a target image used for interfering with the feature recognition of the source image; respectively performing feature extraction on the source image and the target image to obtain a first feature map and a second feature map; obtaining a disturbance feature map based on the first feature map, the second feature map and a disturbance limiting item; and fusing the disturbance feature map with the source image to generate a confrontation image corresponding to the source image. Under the constraint of the disturbance limiting item, the features of the target image and the features of the source image are learned at the same time, so that after the generated disturbance feature map is fused with the source image, a confrontation image covering the features of the target image is obtained; the whole process is simple to operate, and the generation efficiency is high.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an image generation method and apparatus.
Background
With the wide application of neural networks, their safety and stability have received increasing attention. A confrontation sample (adversarial example) of a neural network is a sample formed by adding, to data in the original data set, a disturbance that is invisible to the naked eye or does not affect the overall appearance. Such a sample can cause a neural network model to give, with high confidence, a classification result different from that of the original sample, and can cause, for example, a face recognition model, a license plate recognition model or an image classifier to produce erroneous outputs.
In order for the generated confrontation samples to transfer well to other network platforms, multiple gradient computations and iterations are often required, so the time cost is high.
Disclosure of Invention
The embodiment of the disclosure provides an image processing method and device and an image generation method and device.
In a first aspect, an embodiment of the present disclosure provides a method for image processing, where the method includes:
acquiring a source image and a target image, wherein the target image is used for interfering the feature recognition of the source image;
respectively extracting the features of the source image and the target image to obtain a first feature map and a second feature map;
obtaining a disturbance feature map based on the first feature map, the second feature map and a disturbance limiting item;
and fusing the disturbance feature map with the source image to generate a confrontation image corresponding to the source image.
By adopting this image processing method, when a source image and a target image used for interfering with the feature recognition of the source image are acquired, features can be extracted from the source image and the target image respectively, so that a disturbance feature map can be obtained based on the extracted first feature map, the second feature map and a disturbance limiting item.
That is, the disturbance feature map is obtained by processing the first feature map and the second feature map under the constraint of the disturbance limiting item. Because this constraint allows the features of the target image and the features of the source image to be learned at the same time, fusing the generated disturbance feature map with the source image yields a confrontation image covering the features of the target image; the whole process is simple to operate, and the generation efficiency is high.
In a possible implementation, the confrontation image corresponding to the source image is generated by a trained neural network; the neural network comprises a first encoder and a second encoder;
the respectively performing feature extraction on the source image and the target image to obtain a first feature map and a second feature map includes:
performing feature extraction on the source image by using the first encoder to obtain the first feature map; and
performing feature extraction on the target image by using the second encoder to obtain the second feature map.
In the embodiment of the disclosure, feature extraction can be performed on the source image and the target image respectively by using the first encoder and the second encoder, which are arranged in parallel, to obtain the first feature map and the second feature map; performing the feature extraction operations in parallel can further improve the generation efficiency.
In one possible embodiment, the disturbance limiting term includes a limiting term selected from a plurality of candidate disturbance limiting terms; obtaining a disturbance feature map based on the first feature map, the second feature map and a disturbance limiting term, wherein the obtaining of the disturbance feature map comprises:
splicing the first characteristic diagram and the second characteristic diagram to obtain a spliced third characteristic diagram;
and extracting disturbance information of the third feature map based on the selected disturbance limiting item to obtain the disturbance feature map.
The disturbance feature map can be obtained by performing disturbance extraction on the third feature map under the constraint of the disturbance limiting term, where the third feature map is the splicing result of the two feature maps (i.e., the first feature map and the second feature map) extracted from the source image and from the target image that interferes with its recognition. The constraint of the disturbance limiting term makes it possible to learn more features of the target image while reducing the influence of the features of the source image, so that after the generated disturbance feature map is fused with the source image, a confrontation image that looks similar to the source image but actually covers the features of the target image can be obtained, which is more practical.
In a possible embodiment, the confrontation image corresponding to the source image is generated by a trained neural network, the neural network comprising a decoder; extracting disturbance information of the third feature map based on the selected disturbance limiting item to obtain the disturbance feature map, wherein the extracting includes:
and based on the selected disturbance limiting item, carrying out disturbance information extraction on the third feature map by using the decoder to obtain the disturbance feature map.
In one possible implementation, the decoder comprises a first sub-decoder and a second sub-decoder; the extracting disturbance information of the third feature map by using the decoder based on the selected disturbance limiting item to obtain the disturbance feature map includes:
decoding the third feature graph by using the first sub-decoder to obtain a plurality of decoded feature sub-graphs, and decoding the selected disturbance limiting item by using the second sub-decoder to obtain a disturbance weight matched with each feature sub-graph in the plurality of feature sub-graphs;
and generating the disturbance feature map based on the plurality of feature subgraphs and the disturbance weight.
In the embodiment of the disclosure, the first sub-decoder and the second sub-decoder may be used to decode the third feature map and the selected disturbance limiting item respectively. Decoding the disturbance limiting item assigns a weight to each feature sub-graph decoded from the third feature map, and the disturbance feature map generated in this way conforms to the disturbance limiting item. Each disturbance weight represents, to a certain degree, the influence of the corresponding decoded feature sub-graph in its dimension; for different third feature maps, the feature sub-graphs and disturbance weights obtained by decoding are also different, so this adaptive adjustment makes the generated disturbance feature map conform better to the disturbance limitation.
In a possible implementation, in the case that the source image is a source image sample and the target image is a target image sample, the neural network is trained according to the following steps:
performing at least one round of training on a neural network to be trained based on the source image sample and the target image sample, until the image similarity between the confrontation image sample output by the neural network and the target image sample is greater than a preset similarity, so as to obtain the trained neural network.
Here, one round of training may be performed for each pair of target image sample and source image sample, so that the neural network is obtained through at least one round of model training. The training cutoff condition may be that the image similarity between the confrontation image sample output by the neural network and the target image sample is greater than a preset similarity. Because the confrontation image sample is fused with the feature information of the target image sample, when the image similarity between the two images is sufficiently large, an object in the confrontation image sample is easily misrecognized as the object in the target image sample, thereby achieving the interference effect in the feature recognition process.
In a second aspect, an embodiment of the present disclosure further provides a method for generating an image, where the method includes:
acquiring an original image;
a confrontation image corresponding to the original image is generated by using the method of image processing according to the first aspect and any of its various embodiments.
In one possible embodiment, the method further comprises:
presenting the confrontation image through a display device.
In some embodiments, the original image and the confrontation image comprise face images.
In a third aspect, an embodiment of the present disclosure further provides an apparatus for image processing, where the apparatus includes:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a source image and a target image, and the target image is used for interfering the feature recognition of the source image;
the extraction module is used for respectively extracting the features of the source image and the target image to obtain a first feature map and a second feature map;
the generating module is used for obtaining a disturbance feature map based on the first feature map, the second feature map and a disturbance limiting item;
and the fusion module is used for fusing the disturbance characteristic graph and the source image to generate a confrontation image corresponding to the source image.
In a fourth aspect, an embodiment of the present disclosure further provides an apparatus for image generation, where the apparatus includes:
the acquisition module is used for acquiring an original image;
a generating module, configured to generate a confrontation image corresponding to the original image by using the method for image processing according to the first aspect and any of its various embodiments.
In a fifth aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of image processing according to the first aspect and any of its various embodiments or the steps of the method of image generation according to the second aspect.
In a sixth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by an electronic device, causes the electronic device to perform the steps of the method of image processing according to the first aspect and any of its various embodiments, or the steps of the method of image generation according to the second aspect.
For the description of the effects of the above apparatus, electronic device, and computer-readable storage medium, reference is made to the description of the above corresponding method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings incorporated in and forming a part of the specification illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 shows a flowchart of a method for image processing according to a first embodiment of the disclosure;
fig. 2 is a schematic diagram illustrating an application of a method for image processing according to a first embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating an apparatus for image processing according to a second embodiment of the disclosure;
fig. 4 is a schematic diagram illustrating an apparatus for generating an image according to a second embodiment of the disclosure;
fig. 5 shows a schematic diagram of an electronic device provided in a third embodiment of the present disclosure;
fig. 6 shows a schematic diagram of another electronic device provided in the third embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that the related art provides a gradient-based method for generating confrontation samples: the method induces the network to misclassify the generated picture by adding an increment in the gradient direction. In order for the generated confrontation samples to transfer well to other network platforms, multiple gradient computations and iterations are often required, so the time cost is high.
Based on the research, the present disclosure provides an image processing method and apparatus, and an image generation method and apparatus, which have high generation efficiency of a countermeasure image.
The above-mentioned drawbacks were identified by the inventor after practical and careful study; therefore, both the discovery of the above problems and the solutions proposed by the present disclosure should be regarded as the inventor's contribution in the course of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a method for image processing disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the method for image processing provided in the embodiments of the present disclosure is generally an electronic device with certain computing capability, and the electronic device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the method of image processing may be implemented by a processor calling computer readable instructions stored in a memory.
The following describes a method for processing an image provided by an embodiment of the present disclosure.
Example one
Referring to fig. 1, which is a flowchart of a method for processing an image according to an embodiment of the present disclosure, the method includes steps S101 to S104, where:
s101, acquiring a source image and a target image, wherein the target image is used for interfering the feature recognition of the source image;
s102, respectively extracting the features of a source image and a target image to obtain a first feature map and a second feature map;
s103, obtaining a disturbance characteristic diagram based on the first characteristic diagram, the second characteristic diagram and the disturbance limiting item;
and S104, fusing the disturbance feature graph and the source image to generate a confrontation image corresponding to the source image.
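As an illustration only (not part of the original disclosure), steps S101 to S104 can be sketched as the PyTorch-style pipeline below; the module names (`encoder_src`, `encoder_tgt`, `decoder`), the channel-wise splicing and the additive fusion with clamping are assumptions made for the sake of the example.

```python
import torch

def generate_confrontation_image(source, target, encoder_src, encoder_tgt,
                                 decoder, limit_item):
    """Sketch of S101-S104; source/target are image tensors of shape (N, 3, H, W).

    Assumed components (hypothetical, for illustration only):
      encoder_src / encoder_tgt - feature extractors for source and target images
      decoder                   - maps spliced features + limiting item to a disturbance
      limit_item                - the selected disturbance limiting item
    """
    f1 = encoder_src(source)                      # S102: first feature map
    f2 = encoder_tgt(target)                      # S102: second feature map
    spliced = torch.cat([f1, f2], dim=1)          # splice along the channel dimension
    disturbance = decoder(spliced, limit_item)    # S103: disturbance feature map
    confrontation = torch.clamp(source + disturbance, 0.0, 1.0)  # S104: fuse with source
    return confrontation
```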
Here, to facilitate understanding of the method of image processing provided by the embodiments of the present disclosure, an application scenario of the method is briefly described first. The image processing method can be applied to the field of privacy protection: for example, when a user does not want a photo uploaded to social media to be recognized by a face recognition system, the confrontation image generated by the method can interfere with the face recognition system, so as to achieve the purpose of privacy protection. In addition, the embodiments of the present disclosure can also be applied to other technical fields requiring confrontation images, which are not described herein again.
The confrontation image corresponding to the source image in the embodiment of the disclosure is obtained by fusing the disturbance feature map with the source image, where the disturbance feature map characterizes the degree to which the features of the target image, which interfere with the source image, appear in the source image. So that the disturbance can be kept within a certain visual degree, in the embodiment of the present disclosure the features of the source image (corresponding to the first feature map) and the features of the target image (corresponding to the second feature map) may be learned at the same time based on the disturbance restriction term to obtain the disturbance feature map described above.
In some embodiments, the first feature map and the second feature map may be spliced first, and then the disturbance information of the spliced third feature map is extracted under the disturbance limiting term, so as to obtain the disturbance feature map.
The third feature map contains both the features of the source image and the features of the target image, and the features of the two images interact. Therefore, when the determined disturbance feature map is fused with the source image, the source image can be disturbed sufficiently, while the disturbance limitation restrains the disturbance process so as to prevent an excessively visible disturbance. The generated confrontation image can thus be similar to the source image in appearance while covering many features of the target image, and these features can greatly interfere with the recognition effect of a subsequent face recognition system or the like.
The source image may be an image that needs to undergo confrontation processing in a given application field; it may be a face image, a vehicle image or another image. The target image may be an image that interferes with the feature recognition of the source image; through the feature extraction operation on this image, the recognition result of the source image can be disturbed.
In order to interfere with the feature recognition of the source image, in a specific application the target image may be an image of the same size and the same type as the source image, and the target image may be selected based on a specific image element in the source image. Taking a face image as the source image as an example, if the image element in the face image indicates a woman, a target image that also shows a woman may be used for feature interference. In this way, when face image 1 is used as the source image and face image 2 is used as the target image, face image 1 is erroneously determined, during its recognition, to have the identity to which face image 2 points.
It should be noted that, in the embodiment of the present disclosure, the feature extraction operation on the source image and the target image may be implemented based on a related image processing method, or may be performed by a trained encoder. Considering that feature extraction performed by a trained encoder can mine deeper image features and thus provide rich data support for the generation of the subsequent confrontation image, the embodiment of the present disclosure performs the feature extraction on the source image and the target image in an encoder-based manner.
The first feature map extracted from the source image and the second feature map extracted from the target image can be feature maps of the same dimensions, so that the two feature maps can be spliced along a selected dimension. Similarly, the dimensions of the disturbance feature map generated by the embodiment of the present disclosure can be the same as those of the source image, so that the disturbance feature map can be used as a layer to be fused, and the confrontation image corresponding to the source image is generated by adding the disturbance feature map, as a layer, to the source image.
It should be noted that, in practical applications, the extracted first feature map and second feature map may have different dimensions. Here, in order to splice the feature maps better, an interpolation operation may be performed first: for example, the feature map with the lower resolution may be interpolated into a space with the same dimensions as the other feature map, so that the two feature maps reach the same dimensions before the splicing operation is performed.
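A minimal sketch of the interpolation-then-splicing step described above, assuming PyTorch tensors in (N, C, H, W) layout; aligning the smaller feature map to the spatial size of the larger one, and the use of bilinear interpolation, are assumptions for illustration (the text only says "interpolation").

```python
import torch
import torch.nn.functional as F

def splice_feature_maps(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """Splice two feature maps; interpolate first if their spatial sizes differ."""
    if f1.shape[2:] != f2.shape[2:]:
        # Bring the lower-resolution map up to the other map's spatial size.
        if f1.shape[2] * f1.shape[3] < f2.shape[2] * f2.shape[3]:
            f1 = F.interpolate(f1, size=f2.shape[2:], mode="bilinear", align_corners=False)
        else:
            f2 = F.interpolate(f2, size=f1.shape[2:], mode="bilinear", align_corners=False)
    return torch.cat([f1, f2], dim=1)  # spliced third feature map
```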
In the embodiment of the present disclosure, the confrontation image corresponding to the source image may be generated by a trained neural network, where the neural network performs feature extraction on the source image and the target image through two encoders (i.e., a first encoder and a second encoder); once the first feature map and the second feature map have been extracted, the splicing operation of the first feature map and the second feature map can be performed.
The first feature map and the second feature map can be three-dimensional feature maps; when the splicing operation is performed, the features can be combined in the three dimensions respectively, so that the splicing result of the two feature maps is obtained.
In this way, the decoder included in the neural network extracts disturbance information from the third feature map obtained by the splicing operation to obtain a disturbance feature map, and after the disturbance feature map is fused with the source image, the confrontation image corresponding to the source image is generated, as shown in fig. 2.
The two encoders are arranged in parallel; in practical applications, they can be obtained by training convolutional neural networks with the same network structure.
The network structure of such a convolutional neural network is illustrated by taking the source image as an example. For a source image of dimensions 112 × 112 × 3, where 112, 112 and 3 are respectively the width, height and number of channels of the image, a network structure of three convolutional layers may be used for feature extraction.
With the first convolutional layer having a convolution kernel size of 7 × 7, 64 convolution kernels and a convolution stride of 1, the second convolutional layer having a kernel size of 4 × 4, 128 kernels and a stride of 2, and the third convolutional layer having a kernel size of 4 × 4, 256 kernels and a stride of 2, the source image first passes through the first convolutional layer to obtain a 112 × 112 × 64 convolutional feature map, then through the second convolutional layer to obtain a 56 × 56 × 128 convolutional feature map, and finally through the third convolutional layer to obtain a 28 × 28 × 256 convolutional feature map, which can be used as the first feature map obtained by feature extraction on the source image.
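The three-layer convolutional encoder described above can be written down directly as a sketch; it follows the stated kernel sizes, channel counts and strides, while the padding values (3, 1, 1) and the ReLU activations are assumptions chosen so that the stated 112 → 112 → 56 → 28 spatial sizes work out.

```python
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """112x112x3 image -> 28x28x256 feature map, per the layer sizes given above."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=1, padding=3),    # -> 112x112x64
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),  # -> 56x56x128
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1), # -> 28x28x256
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# e.g. FeatureEncoder()(torch.randn(1, 3, 112, 112)).shape == (1, 256, 28, 28)
```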
In the embodiment of the present disclosure, once the first feature map and the second feature map have been extracted using the above feature extraction method and spliced into the third feature map, disturbance information may be extracted from the third feature map by a decoder based on the selected disturbance limiting item. For the selected disturbance limiting item, the decoder can decode, from the third feature map, disturbance features that conform to that disturbance limiting item, and the decoded disturbance features are added to the source image to obtain the confrontation image.
In order to implement targeted disturbance decoding, as shown in fig. 2, the embodiment of the present disclosure may perform decoding based on a first sub-decoder and a second sub-decoder included in the decoder to generate the disturbance feature map, which may specifically be implemented according to the following steps:
decoding the third feature graph by using a first sub-decoder to obtain a plurality of decoded feature subgraphs, and decoding the selected disturbance limiting item by using a second sub-decoder to obtain a disturbance weight matched with each feature subgraph in the plurality of feature subgraphs;
and step two, generating a disturbance feature map based on the plurality of feature subgraphs and the disturbance weight.
Here, the first sub-decoder is used for decoding the spliced third feature map; the multiple decoded feature sub-graphs represent the information of the third feature map in each dimension and provide usable components for the disturbance features, while the second sub-decoder decodes, for the selected disturbance limiting item, the weight vector matched with the feature sub-graphs, from which the final disturbance feature map is formed.
It should be noted that, when a plurality of feature sub-graphs and the corresponding weight vector have been decoded, the disturbance feature map may be determined by a weighted summation operation.
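This weighted summation can be sketched in a few lines; the layout below, with the K feature sub-graphs stacked along a new dimension and one scalar disturbance weight per sub-graph, is an assumption made for illustration.

```python
import torch

def weighted_disturbance(feature_subgraphs: torch.Tensor,
                         weights: torch.Tensor) -> torch.Tensor:
    """feature_subgraphs: (N, K, C, H, W) - K decoded feature sub-graphs
    weights:            (N, K)           - disturbance weight matched with each sub-graph
    returns             (N, C, H, W)     - disturbance feature map (weighted sum)"""
    w = weights.view(*weights.shape, 1, 1, 1)      # broadcast to (N, K, 1, 1, 1)
    return (w * feature_subgraphs).sum(dim=1)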
Considering that the feature extraction process using the first encoder and the second encoder can be understood as a mapping from low-dimensional features to high-dimensional features, and that a certain amount of spatial information is lost in this mapping, the disclosed embodiment uses the first sub-decoder to recover part of the spatial dimensions; that is, the decoding process of the first sub-decoder can be understood as a process of adaptively recovering the spatial dimensions lost in the encoding stage.
The first sub-decoder in the embodiments of the present disclosure may be trained as a deep neural network with deconvolution layers. In a specific application, the conversion from high-dimensional features to low-dimensional features can be realized by three deconvolution layers; for example, the conversion can be realized by passing, in sequence, through a first deconvolution layer with a kernel size of 4 × 4, a second deconvolution layer with a kernel size of 4 × 4, and a third deconvolution layer with a kernel size of 7 × 7, so as to obtain the plurality of feature sub-graphs.
It should be noted that, in order to improve the decoding performance of the first sub-decoder, several residual network layers may also be applied before the deconvolution operations are performed.
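A sketch of one way the first sub-decoder could look, with a few residual blocks followed by three deconvolution layers of kernel sizes 4x4, 4x4 and 7x7 as described above; the channel counts, strides, paddings, number of residual blocks, number of sub-graphs, the 512-channel spliced input and the ReLU activations are all assumptions, since the text only fixes the kernel sizes.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

class FirstSubDecoder(nn.Module):
    """Spliced third feature map (e.g. 28x28x512) -> K feature sub-graphs."""

    def __init__(self, in_channels: int = 512, num_subgraphs: int = 8, num_res: int = 4):
        super().__init__()
        self.res = nn.Sequential(*[ResidualBlock(in_channels) for _ in range(num_res)])
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 128, kernel_size=4, stride=2, padding=1),  # 28 -> 56
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),           # 56 -> 112
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, num_subgraphs * 3, kernel_size=7, stride=1, padding=3),
        )
        self.num_subgraphs = num_subgraphs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.deconv(self.res(x))                   # (N, K*3, 112, 112)
        n, _, h, w = y.shape
        # each sub-graph is given 3 channels so it can be fused with the source image
        return y.view(n, self.num_subgraphs, 3, h, w)
```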
Matching the network structure of the first sub-decoder, the second sub-decoder in the embodiment of the present disclosure may determine the disturbance weights by using several activation layers, linear regression layers and other network layers.
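Correspondingly, a minimal sketch of the second sub-decoder: a small stack of linear layers with activations that maps the selected disturbance limiting item to one disturbance weight per feature sub-graph. Representing the limiting item as a short vector, the hidden width, the ReLU activation and the sigmoid normalisation are assumptions; the text only mentions activation layers and linear regression layers.

```python
import torch
import torch.nn as nn

class SecondSubDecoder(nn.Module):
    """Maps the selected disturbance limiting item to K disturbance weights."""

    def __init__(self, limit_dim: int = 1, num_subgraphs: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(limit_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_subgraphs),
        )

    def forward(self, limit_item: torch.Tensor) -> torch.Tensor:
        # limit_item: (N, limit_dim) -> disturbance weights: (N, K)
        return torch.sigmoid(self.net(limit_item))
```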
In the embodiment of the present disclosure, the multiple feature sub-graphs and the disturbance weight matched with each feature sub-graph may be obtained synchronously. This mainly takes into account that, in the process of training the two sub-decoders, the decoding performed by the first sub-decoder may be implemented through a feature dictionary, where the generation of the feature sub-graphs can be guided by the feedback information of the subsequent model; meanwhile, the relevant disturbance weights can be determined through the decoding of the second sub-decoder, so as to determine the degree to which each feature sub-graph influences the finally generated disturbance feature map, which makes it possible to adapt to the disturbance requirements of different disturbance limiting items.
The feature dictionary in the embodiment of the present disclosure may be formed of individual basis vectors and stored in a database in advance. The number of feature sub-graphs decoded from the third feature map may be preset; when the first sub-decoder is actually trained, the training can be regarded as a process of selecting which basis vectors from the feature dictionary in the database are used as the decoded features of the third feature map.
In the initial stage of training, a plurality of initial feature sub-graphs may be selected (each feature sub-graph may correspond to one basis vector), and whether the selected feature sub-graphs are accurate enough is determined through the output information of the subsequent model; when the selected feature sub-graphs are not accurate enough, the model training can be adjusted through the feedback information.
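The feature dictionary described above can be thought of as a bank of basis vectors from which the feature sub-graphs are assembled. The sketch below, which treats the dictionary as a learnable parameter and forms each sub-graph as a linear combination of basis vectors predicted elsewhere in the network, is purely an illustrative assumption about one way such a dictionary could be realised; the dictionary size, basis dimension and combination rule are not specified in the text.

```python
import torch
import torch.nn as nn

class FeatureDictionary(nn.Module):
    """A bank of basis vectors; each feature sub-graph is assembled from them."""

    def __init__(self, num_basis: int = 256, basis_dim: int = 3 * 112 * 112):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(num_basis, basis_dim) * 0.01)

    def forward(self, selection: torch.Tensor) -> torch.Tensor:
        # selection: (N, K, num_basis) coefficients over the basis vectors;
        # returns    (N, K, 3, 112, 112) feature sub-graphs.
        subgraphs = selection @ self.basis          # (N, K, basis_dim)
        n, k, _ = subgraphs.shape
        return subgraphs.view(n, k, 3, 112, 112)
```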
Considering the critical role that the training process of the neural network plays in confrontation image generation, it is described in detail below. When the source image is a source image sample and the target image is a target image sample, the process of training the neural network may be: performing at least one round of training on the neural network to be trained based on the source image sample and the target image sample, until the image similarity between the confrontation image sample output by the neural network and the target image sample is greater than the preset similarity, thereby obtaining the trained neural network.
The number of target image samples may be one or more. For each target image sample, one round of training of the neural network may be performed. First, a target image sample and a source image sample are selected as the input samples of the current round of training, so that the image similarity between the confrontation image sample output by the neural network and the target image sample can be determined; the neural network is adaptively adjusted with the training objective that this image similarity be greater than the preset similarity. Then the next target image sample is selected for the next round of training, and this process is repeated until the trained neural network is obtained when the training cutoff condition is reached.
In the embodiment of the present disclosure, based on the source image sample and the target image sample, a round of training may be performed on the neural network to be trained according to the following steps:
step one, inputting the source image sample to a first encoder in the neural network to be trained to obtain a first feature map output by the first encoder, and inputting the target image sample to a second encoder in the neural network to be trained to obtain a second feature map output by the second encoder;
step two, obtaining a disturbance feature map output by a decoder based on the first feature map, the second feature map, a preset candidate disturbance limiting item and the decoder in the neural network to be trained;
step three, fusing the disturbance feature map output by the decoder with the source image sample, and determining a confrontation image sample corresponding to the source image sample;
and step four, adjusting parameters of the neural network to be trained based on the confrontation image sample, the target image sample and the trained similarity model, completing the training of the current round, and performing the next round of training until the network training cutoff condition is reached.
In the embodiment of the present disclosure, the process of training the neural network is similar to the process of applying it: the feature encoding may be performed by the two encoders respectively, and the decoding of the spliced feature map may be performed by the decoder; for the related process, refer to fig. 2 and the related description, which are not repeated here.
Considering that different application scenarios may require different interference levels of the confrontation image, a plurality of candidate disturbance limiting items may be used in the training phase of the neural network. In the embodiment of the present disclosure, for each candidate disturbance limiting item, the spliced third feature map and the candidate disturbance limiting item may be input to the decoder in the neural network to be trained, so as to obtain the disturbance feature map output by the decoder for that candidate disturbance limiting item; that is, each candidate disturbance limiting item corresponds to one disturbance feature map. In this way, for each target image sample, each of the plurality of disturbance feature maps can be fused with the source image sample, so that a plurality of confrontation image samples corresponding to the source image sample are obtained.
The image processing method provided in the embodiments of the present disclosure may train the neural network based on the feedback result of a trained similarity model. Here, after each round of training, the generated confrontation image samples and the target image sample may be input into the trained similarity model to determine the image similarity between each confrontation image sample output by the neural network and the target image sample; when the image similarity between any confrontation image sample and the target image sample is less than the preset similarity (e.g., 0.5), the parameters of the neural network are adjusted, until the model training cutoff condition is reached.
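Putting steps one to four and the similarity-based feedback together, one round of training could look roughly like the sketch below. The hinge-style loss (which drives the similarity above the preset threshold), the optimiser usage and the additive fusion are assumptions; `similarity_model` stands for the trained similarity model mentioned above and `candidate_limits` for the preset candidate disturbance limiting items.

```python
import torch

def train_one_round(source_sample, target_sample, encoder_src, encoder_tgt, decoder,
                    similarity_model, candidate_limits, optimizer, preset_sim=0.5):
    """One training round over all candidate disturbance limiting items (sketch only)."""
    f1 = encoder_src(source_sample)                  # step one: first feature map
    f2 = encoder_tgt(target_sample)                  #           second feature map
    third = torch.cat([f1, f2], dim=1)               # spliced third feature map

    losses = []
    for limit_item in candidate_limits:              # step two: one map per candidate item
        disturbance = decoder(third, limit_item)
        confrontation = torch.clamp(source_sample + disturbance, 0.0, 1.0)  # step three
        sim = similarity_model(confrontation, target_sample)                # feedback
        # step four: push the similarity above the preset threshold (assumed hinge loss)
        losses.append(torch.relu(preset_sim - sim).mean())

    loss = torch.stack(losses).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```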
The model training cutoff condition in the embodiment of the present disclosure may be that the number of training rounds reaches a preset threshold (for example, 100 rounds), or that all target image samples have been traversed, or another cutoff condition, which is not specifically limited by the embodiment of the present disclosure.
Here, it is considered that confrontation image samples generated for different candidate disturbance limiting items contain different amounts of the features of the target image sample; therefore, different preset similarities are determined for different disturbance limiting items. In the embodiment of the present disclosure, a larger preset similarity (for example, 0.8) may be set for a disturbance limiting item that allows larger disturbance interference, and a smaller preset similarity (for example, 0.4) may be set for a disturbance limiting item that allows smaller disturbance interference.
In the embodiment of the present disclosure, similarly to the decoding performed in the application stage by the trained first sub-decoder and second sub-decoder, the same decoding mode may be used in the training stage: the spliced third feature map is input to the first sub-decoder in the decoder to obtain the plurality of feature sub-graphs output by the first sub-decoder, and the candidate disturbance limiting item is input to the second sub-decoder in the decoder to obtain the disturbance weight, output by the second sub-decoder, matched with each feature sub-graph; the disturbance feature map output by the decoder is then obtained based on the plurality of feature sub-graphs output by the first sub-decoder and the disturbance weights output by the second sub-decoder, as described above and not repeated here.
The method for processing the image provided by the embodiment of the disclosure can be applied to the process of generating the image, and can be implemented according to the following steps:
step one, obtaining an original image;
and step two, generating a confrontation image corresponding to the original image by using an image processing method.
Considering that the confrontation image generated by the image processing method is generated under the relevant disturbance limitation, the confrontation image generated here can be an image that looks similar to the source image, so that a user can hardly perceive the real change with the naked eye when the confrontation image is displayed by a real device, and the user experience is not reduced. Meanwhile, the generated confrontation image covers the relevant information of the interfering image, so that even if the published confrontation image is stolen by a malicious platform, it can seriously interfere with the judgment of a recognition model, thereby ensuring that the user's privacy is not leaked.
The image generation method provided by the embodiment of the disclosure can be applied to different application scenes.
For example, when applied to an Application (APP) of a self-media platform on a user terminal, the original image may be a face image uploaded by a user, and the confrontation image corresponding to the original image may be an image that looks similar to that face image but carries certain feature interference; in this way, even if the confrontation image is displayed, the real identity information of the user is not revealed, and the user's privacy can be better protected.
For another example, when applied to a recognition and monitoring system of a large shopping mall, the original image may be a pedestrian picture captured by a monitoring device, and the confrontation image may also be generated by the above method, so as to ensure that the user's privacy is not revealed without substantially changing the appearance of the user's face.
Therefore, the image generation method provided by the embodiment of the present disclosure is fast and allows the target and the disturbance magnitude to be chosen freely; it can be applied not only to image privacy protection services but also to other application services, which are not described herein again.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, an image processing apparatus corresponding to the image processing method and an image generating apparatus corresponding to the image generating method are also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
Example two
Referring to fig. 3, a schematic diagram of an apparatus for image processing according to an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 301, an extraction module 302, a generation module 303 and a fusion module 304; wherein,
an obtaining module 301, configured to obtain a source image and a target image, where the target image is used to interfere with feature recognition of the source image;
an extraction module 302, configured to perform feature extraction on the source image and the target image respectively to obtain a first feature map and a second feature map;
a generating module 303, configured to obtain a disturbance feature map based on the first feature map, the second feature map, and the disturbance limiting item;
and the fusion module 304 is configured to fuse the disturbance feature map and the source image to generate a confrontation image corresponding to the source image.
The disturbance feature map in the embodiment of the present disclosure may be obtained by processing the first feature map and the second feature map under the constraint of the disturbance limiting term. Because this constraint allows the features of the target image and the features of the source image to be learned at the same time, fusing the generated disturbance feature map with the source image yields a confrontation image covering the features of the target image; the whole process is simple to operate, and the generation efficiency is high.
In one possible embodiment, the confrontation image corresponding to the source image is generated by a trained neural network; the neural network comprises a first encoder and a second encoder; an extracting module 302, configured to perform feature extraction on the source image and the target image respectively according to the following steps to obtain a first feature map and a second feature map:
performing feature extraction on the source image by using the first encoder to obtain the first feature map; and
performing feature extraction on the target image by using the second encoder to obtain the second feature map.
In one possible embodiment, the disturbance limiting term includes a limiting term selected from a plurality of candidate disturbance limiting terms; a generating module 303, configured to obtain a disturbance feature map based on the first feature map, the second feature map, and the disturbance limiting term according to the following steps:
splicing the first characteristic diagram and the second characteristic diagram to obtain a spliced third characteristic diagram;
and extracting disturbance information of the third feature map based on the selected disturbance limiting item to obtain a disturbance feature map.
In one possible embodiment, the confrontation image corresponding to the source image is generated by a trained neural network, the neural network further comprising a decoder; the generating module 303 is configured to extract disturbance information from the third feature map based on the selected disturbance limiting item according to the following steps to obtain a disturbance feature map:
and based on the selected disturbance limiting item, carrying out disturbance information extraction on the third feature map by using a decoder to obtain a disturbance feature map.
In one possible embodiment, the decoder comprises a first sub-decoder and a second sub-decoder; the generating module 303 is configured to extract the disturbance information of the third feature map by using a decoder based on the selected disturbance limiting item according to the following steps to obtain a disturbance feature map:
decoding the third feature graph by using the first sub-decoder to obtain a plurality of decoded feature sub-graphs, and decoding the selected disturbance limiting item by using the second sub-decoder to obtain a disturbance weight matched with each feature sub-graph in the plurality of feature sub-graphs;
and generating a disturbance feature map based on the plurality of feature subgraphs and the disturbance weight.
In a possible implementation manner, in the case that the source image is a source image sample and the target image is a target image sample, the apparatus further includes:
the training module 305 is configured to perform at least one round of training on a neural network to be trained based on the source image sample and the target image sample until an image similarity between the confrontation image sample and the target image sample output by the neural network is greater than a preset similarity, and train to obtain the neural network.
Referring to fig. 4, a schematic diagram of an apparatus for generating an image according to an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 401 and a generation module 402; wherein,
an obtaining module 401, configured to obtain an original image;
a generating module 402, configured to generate a confrontation image corresponding to the original image by using the above image processing method.
In a possible embodiment, the above apparatus further comprises:
and a display module 403 for displaying the confrontation image through the display device.
In some embodiments, the original image and the confrontation image comprise face images.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
EXAMPLE III
An embodiment of the present disclosure further provides an electronic device. As shown in fig. 5, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, the electronic device includes: a processor 501, a memory 502 and a bus 503. The memory 502 stores machine-readable instructions executable by the processor 501 (for example, execution instructions corresponding to the acquiring module 301, the extracting module 302, the generating module 303 and the fusing module 304 in the image processing apparatus in fig. 3); when the electronic device runs, the processor 501 and the memory 502 communicate through the bus 503, and the machine-readable instructions, when executed by the processor 501, perform the following processes:
acquiring a source image and a target image, wherein the target image is used for interfering the feature recognition of the source image;
respectively extracting features of a source image and a target image to obtain a first feature map and a second feature map;
obtaining a disturbance characteristic diagram based on the first characteristic diagram, the second characteristic diagram and the disturbance limiting item;
and fusing the disturbance characteristic graph and the source image to generate a confrontation image corresponding to the source image.
In one possible embodiment, the confrontation image corresponding to the source image is generated by a trained neural network; the neural network comprises a first encoder and a second encoder;
in the instruction executed by the processor 501, the performing feature extraction on the source image and the target image respectively to obtain a first feature map and a second feature map includes:
performing feature extraction on the source image by using the first encoder to obtain the first feature map; and
performing feature extraction on the target image by using the second encoder to obtain the second feature map.
In one possible embodiment, the disturbance limiting term includes a limiting term selected from a plurality of candidate disturbance limiting terms; in the instruction executed by the processor 501, obtaining the disturbance feature map based on the first feature map, the second feature map, and the disturbance limiting term includes:
splicing the first characteristic diagram and the second characteristic diagram to obtain a spliced third characteristic diagram;
and extracting disturbance information of the third feature map based on the selected disturbance limiting item to obtain a disturbance feature map.
In one possible embodiment, the confrontation image corresponding to the source image is generated by the trained neural network, and the neural network further comprises a decoder; in the instructions executed by the processor 501, extracting disturbance information from the third feature map based on the selected disturbance limiting item to obtain the disturbance feature map includes:
and based on the selected disturbance limiting item, carrying out disturbance information extraction on the third feature map by using a decoder to obtain a disturbance feature map.
In one possible embodiment, the decoder comprises a first sub-decoder and a second sub-decoder. In the instructions executed by the processor 501, the performing disturbance information extraction on the third feature map by using the decoder based on the selected disturbance limiting item to obtain the disturbance feature map includes:
decoding the third feature map by using the first sub-decoder to obtain a plurality of decoded feature sub-maps, and decoding the selected disturbance limiting item by using the second sub-decoder to obtain a disturbance weight matched with each feature sub-map in the plurality of feature sub-maps; and
generating the disturbance feature map based on the plurality of feature sub-maps and the disturbance weights. One possible realization of the two sub-decoders is sketched below.
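In the illustrative sketch below, the first sub-decoder upsamples the third feature map into several image-sized feature sub-maps, and the second sub-decoder maps the selected limiting item to one weight per sub-map. The layer shapes, the number of sub-maps, and the weighted-sum fusion are assumptions made for illustration only.

```python
# Illustrative sketch of the first and second sub-decoders; shapes are assumed.
import torch
import torch.nn as nn

num_sub_maps = 4

# First sub-decoder: third feature map -> several image-sized feature sub-maps.
first_sub_decoder = nn.Sequential(
    nn.ConvTranspose2d(128, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3 * num_sub_maps, kernel_size=4, stride=2, padding=1), nn.Tanh(),
)

# Second sub-decoder: selected disturbance limiting item -> one weight per sub-map.
second_sub_decoder = nn.Sequential(nn.Linear(1, num_sub_maps), nn.Softmax(dim=-1))

third_feature_map = torch.rand(1, 128, 28, 28)
selected_item = torch.tensor([[0.03]])

sub_maps = first_sub_decoder(third_feature_map)                # (1, 12, 112, 112)
sub_maps = sub_maps.view(1, num_sub_maps, 3, 112, 112)
weights = second_sub_decoder(selected_item).view(1, num_sub_maps, 1, 1, 1)

# Disturbance feature map: weighted combination of the decoded sub-maps.
disturbance_feature_map = (weights * sub_maps).sum(dim=1)      # (1, 3, 112, 112)
```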
In a possible implementation, in the case that the source image is a source image sample and the target image is a target image sample, the processor 501 executes instructions to train the neural network according to the following steps:
performing at least one round of training on the neural network to be trained based on the source image sample and the target image sample, until the image similarity between the confrontation image sample output by the neural network and the target image sample is greater than a preset similarity, so as to obtain the trained neural network. An illustrative sketch of such a training loop is given below.
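In the sketch that follows, the surrogate recognition network, the loss weighting, and the similarity threshold are assumptions and are not values given in this disclosure.

```python
# Illustrative training sketch; the recognizer, loss weights and threshold are assumed.
import torch
import torch.nn.functional as F


def train_generator(generator, recognizer, loader,
                    preset_similarity=0.8, lr=1e-4, max_rounds=100):
    """Train until the confrontation sample matches the target in feature space."""
    optimizer = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(max_rounds):                        # at least one round of training
        for source_sample, target_sample in loader:
            confrontation_sample = generator(source_sample, target_sample)
            similarity = F.cosine_similarity(
                recognizer(confrontation_sample), recognizer(target_sample)).mean()
            # Pull the confrontation sample toward the target in feature space
            # while keeping it visually close to the source sample.
            loss = (1.0 - similarity) + 10.0 * F.mse_loss(confrontation_sample, source_sample)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if similarity.item() > preset_similarity:  # preset similarity reached
                return generator
    return generator
```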
For the specific execution process of the instructions, reference may be made to the steps of the image processing method in the embodiments of the present disclosure, and details are not described here.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 6, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes: a processor 601, a memory 602, and a bus 603. The memory 602 stores machine-readable instructions executable by the processor 601 (for example, execution instructions corresponding to the obtaining module 401 and the generating module 402 in the apparatus for generating an image in fig. 4, and the like), when the electronic device runs, the processor 601 and the memory 602 communicate via the bus 603, and when the machine-readable instructions are executed by the processor 601, the following processes are performed:
acquiring an original image;
and generating a confrontation image corresponding to the original image by using the image processing method.
In a possible implementation manner, the instructions executed by the processor 601 further include:
presenting the confrontation image through a display device.
In some embodiments, the original image and the confrontation image comprise face images. A minimal usage sketch of the generation flow is given below.
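For illustration only: the file paths, the 112x112 resolution, and the PerturbationGenerator class sketched after the processing steps above are assumptions; here the original image plays the role of the source image, and a target image is still assumed to be available to the underlying processing method.

```python
# Illustrative usage sketch; paths, resolution and the generator class are assumed.
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

# Acquire the original image and an assumed target image.
original = to_tensor(Image.open("face.jpg").convert("RGB").resize((112, 112))).unsqueeze(0)
target = to_tensor(Image.open("target.jpg").convert("RGB").resize((112, 112))).unsqueeze(0)

generator = PerturbationGenerator()        # a trained instance is assumed available
with torch.no_grad():
    confrontation = generator(original, target)

# Present the confrontation image through a display device.
to_pil_image(confrontation.squeeze(0)).show()
```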
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, performs the steps of the method for image processing or the steps of the method for image generation described in the above-mentioned method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the image processing method or the image generation method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to execute the image processing method or the image generation method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The embodiments of the present disclosure also provide a computer program, which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (13)
1. A method of image processing, the method comprising:
acquiring a source image and a target image, wherein the target image is used for interfering with the feature recognition of the source image;
respectively extracting the features of the source image and the target image to obtain a first feature map and a second feature map;
obtaining a disturbance feature map based on the first feature map, the second feature map and a disturbance limiting item;
and fusing the disturbance feature map with the source image to generate a confrontation image corresponding to the source image.
2. The method of claim 1, wherein the confrontation image corresponding to the source image is generated by a trained neural network; the neural network comprises a first encoder and a second encoder;
and the respectively extracting the features of the source image and the target image to obtain a first feature map and a second feature map comprises:
performing feature extraction on the source image by using the first encoder to obtain the first feature map; and
performing feature extraction on the target image by using the second encoder to obtain the second feature map.
3. The method of claim 2, wherein the disturbance limiting item comprises a limiting item selected from a plurality of candidate disturbance limiting items; and the obtaining a disturbance feature map based on the first feature map, the second feature map and the disturbance limiting item comprises:
splicing the first feature map and the second feature map to obtain a spliced third feature map;
and extracting disturbance information of the third feature map based on the selected disturbance limiting item to obtain the disturbance feature map.
4. The method of claim 3, wherein the confrontation image corresponding to the source image is generated by a trained neural network, the neural network including a decoder; and the extracting disturbance information of the third feature map based on the selected disturbance limiting item to obtain the disturbance feature map comprises:
and based on the selected disturbance limiting item, carrying out disturbance information extraction on the third feature map by using the decoder to obtain the disturbance feature map.
5. The method of claim 4, wherein the decoder comprises a first sub-decoder and a second sub-decoder; the extracting disturbance information of the third feature map by using the decoder based on the selected disturbance limiting item to obtain the disturbance feature map includes:
decoding the third feature map by using the first sub-decoder to obtain a plurality of decoded feature sub-maps, and decoding the selected disturbance limiting item by using the second sub-decoder to obtain a disturbance weight matched with each feature sub-map in the plurality of feature sub-maps; and
generating the disturbance feature map based on the plurality of feature sub-maps and the disturbance weights.
6. The method according to any one of claims 2-5, wherein, in the case that the source image is a source image sample and the target image is a target image sample, the neural network is trained according to the following steps:
performing at least one round of training on a neural network to be trained based on the source image sample and the target image sample, until the image similarity between the confrontation image sample output by the neural network and the target image sample is greater than a preset similarity, so as to obtain the trained neural network.
7. A method of image generation, the method comprising:
acquiring an original image;
generating a confrontation image corresponding to the original image using the method of image processing of any one of claims 1-6.
8. The method of claim 7, further comprising:
presenting the confrontation image through a display device.
9. The method of claim 7 or 8, wherein the original image and the confrontation image comprise face images.
10. An apparatus for image processing, the apparatus comprising:
an acquisition module, configured to acquire a source image and a target image, wherein the target image is used for interfering with the feature recognition of the source image;
an extraction module, configured to respectively extract the features of the source image and the target image to obtain a first feature map and a second feature map;
a generating module, configured to obtain a disturbance feature map based on the first feature map, the second feature map and a disturbance limiting item; and
a fusion module, configured to fuse the disturbance feature map with the source image to generate a confrontation image corresponding to the source image.
11. An apparatus for image generation, the apparatus comprising:
an acquisition module, configured to acquire an original image; and
a generating module, configured to generate a confrontation image corresponding to the original image by using the method of image processing according to any one of claims 1 to 6.
12. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of image processing according to any one of claims 1 to 6 or the steps of the method of image generation according to any one of claims 7 to 9.
13. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by an electronic device, performs the steps of the method of image processing according to any one of claims 1 to 6 or the steps of the method of image generation according to any one of claims 7 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011349027.3A CN112418332B (en) | 2020-11-26 | 2020-11-26 | Image processing method and device and image generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112418332A true CN112418332A (en) | 2021-02-26 |
CN112418332B CN112418332B (en) | 2022-09-23 |
Family
ID=74842950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011349027.3A Active CN112418332B (en) | 2020-11-26 | 2020-11-26 | Image processing method and device and image generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112418332B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190188562A1 (en) * | 2017-12-15 | 2019-06-20 | International Business Machines Corporation | Deep Neural Network Hardening Framework |
CN110210573A (en) * | 2019-06-11 | 2019-09-06 | 腾讯科技(深圳)有限公司 | Fight generation method, device, terminal and the storage medium of image |
CN110705652A (en) * | 2019-10-17 | 2020-01-17 | 北京瑞莱智慧科技有限公司 | Countermeasure sample, generation method, medium, device and computing equipment thereof |
CN111967592A (en) * | 2020-07-09 | 2020-11-20 | 中国电子科技集团公司第三十六研究所 | Method for generating counterimage machine recognition based on positive and negative disturbance separation |
Non-Patent Citations (2)
Title |
---|
YAN LU ET AL.: "Cross-Modality Person Re-Identification With Shared-Specific Feature Transfer", 《IEEE XPLORE》 * |
潘文雯 等: "对抗样本生成技术综述", 《软件学报》 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022184019A1 (en) * | 2021-03-05 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, and device and storage medium |
CN113239851A (en) * | 2021-05-27 | 2021-08-10 | 支付宝(杭州)信息技术有限公司 | Privacy image processing method, device and equipment based on privacy protection |
CN113239851B (en) * | 2021-05-27 | 2023-06-23 | 支付宝(杭州)信息技术有限公司 | Privacy image processing method, device and equipment based on privacy protection |
CN113223101A (en) * | 2021-05-28 | 2021-08-06 | 支付宝(杭州)信息技术有限公司 | Image processing method, device and equipment based on privacy protection |
CN113312668A (en) * | 2021-06-08 | 2021-08-27 | 支付宝(杭州)信息技术有限公司 | Image identification method, device and equipment based on privacy protection |
WO2023082162A1 (en) * | 2021-11-12 | 2023-05-19 | 华为技术有限公司 | Image processing method and apparatus |
CN115100614A (en) * | 2022-06-21 | 2022-09-23 | 重庆长安汽车股份有限公司 | Evaluation method and device of vehicle perception system, vehicle and storage medium |
WO2024021134A1 (en) * | 2022-07-25 | 2024-02-01 | 首都师范大学 | Image processing method and apparatus, computer device and storage medium |
WO2024045421A1 (en) * | 2022-08-30 | 2024-03-07 | 浪潮(北京)电子信息产业有限公司 | Image protection method and related device |
Also Published As
Publication number | Publication date |
---|---|
CN112418332B (en) | 2022-09-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112418332B (en) | Image processing method and device and image generation method and device | |
CN111242088A (en) | Target detection method and device, electronic equipment and storage medium | |
CN112052942B (en) | Neural network model training method, device and system | |
CN111598111B (en) | Three-dimensional model generation method, device, computer equipment and storage medium | |
CN113409437B (en) | Virtual character face pinching method and device, electronic equipment and storage medium | |
CN110765795B (en) | Two-dimensional code identification method and device and electronic equipment | |
US20140212044A1 (en) | Image Matching Using Subspace-Based Discrete Transform Encoded Local Binary Patterns | |
CN114078201B (en) | Multi-target class confrontation sample generation method and related equipment | |
CN110991298A (en) | Image processing method and device, storage medium and electronic device | |
CN112529897A (en) | Image detection method and device, computer equipment and storage medium | |
CN113537254A (en) | Image feature extraction method and device, electronic equipment and readable storage medium | |
CN112733946A (en) | Training sample generation method and device, electronic equipment and storage medium | |
CN112802081A (en) | Depth detection method and device, electronic equipment and storage medium | |
CN113255575B (en) | Neural network training method and device, computer equipment and storage medium | |
CN111160251A (en) | Living body identification method and device | |
CN114360015A (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
CN114339049A (en) | Video processing method and device, computer equipment and storage medium | |
CN111382654A (en) | Image processing method and apparatus, and storage medium | |
CN113642359B (en) | Face image generation method and device, electronic equipment and storage medium | |
CN111428612A (en) | Pedestrian re-identification method, terminal, device and storage medium | |
Muddamsetty et al. | Salient objects detection in dynamic scenes using color and texture features | |
CN115880530A (en) | Detection method and system for resisting attack | |
CN113591969B (en) | Face similarity evaluation method, device, equipment and storage medium | |
CN115497176A (en) | Living body detection model training method, living body detection method and system | |
CN113887518A (en) | Behavior detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||