CN113569611A - Image processing method, image processing device, computer equipment and storage medium - Google Patents
- Publication number: CN113569611A
- Application number: CN202110177584.XA
- Authority: CN (China)
- Prior art keywords: image, target, current, feature, adversarial
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214—Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/22—Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06F18/24—Pattern recognition; analysing; classification techniques
- G06N3/045—Neural networks; architecture; combinations of networks
- G06N3/08—Neural networks; learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The application relates to an image processing method, an image processing apparatus, a computer device and a storage medium. The method comprises the following steps: acquiring a target image of a target image category and a current adversarial image corresponding to an adversarial image category; performing feature extraction on the target image to obtain a target extracted feature, and obtaining a reference extracted feature based on the target extracted feature; performing feature extraction on the current adversarial image to obtain an adversarial extracted feature; calculating a feature difference value between the adversarial extracted feature and the reference extracted feature, and obtaining a target image loss value based on the feature difference value; and obtaining a perturbation adjustment value based on the target image loss value, and adjusting the pixel values of the current adversarial image based on the perturbation adjustment value to obtain a target adversarial image corresponding to the target image. The method can improve the adversarial effect of adversarial samples. The image recognition model in the application can be based on an artificial-intelligence neural network model; applied in the field of artificial intelligence, the scheme can improve the reliability of neural network models.
Description
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
With the development of artificial intelligence technology, adversarial images, i.e. images formed by intentionally adding slight perturbations that cause a recognition model to give an erroneous output with high confidence, have come to be used in many fields.
Adversarial images can be used in model training to improve a model's resistance to attack, and can also be used to encrypt images. For example, while a neural network model is in use, it may be attacked maliciously: an attacker can feed an aggressive image into the model so that it outputs an erroneous result. To prevent such attacks, the reliability of the neural network model needs to be improved; for example, adversarial samples can be fed into the model during training, improving its ability to recognize attack samples and hence its robustness. However, adversarial samples obtained with current methods have a poor adversarial effect.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image processing method, apparatus, computer device and storage medium capable of improving the adversarial effect of adversarial samples.
A method of image processing, the method comprising: acquiring a target image of a target image category and a current adversarial image corresponding to an adversarial image category; performing feature extraction on the target image to obtain a target extracted feature, and obtaining a reference extracted feature based on the target extracted feature; performing feature extraction on the current adversarial image to obtain an adversarial extracted feature; calculating a feature difference value between the adversarial extracted feature and the reference extracted feature, and obtaining a target image loss value based on the feature difference value, the feature difference value being positively correlated with the target image loss value; and obtaining a perturbation adjustment value based on the target image loss value, and adjusting the pixel values of the current adversarial image based on the perturbation adjustment value to obtain a target adversarial image corresponding to the target image.
An image processing apparatus, the apparatus comprising: a current adversarial image acquisition module, configured to acquire a target image of a target image category and a current adversarial image corresponding to an adversarial image category; a reference extracted feature obtaining module, configured to perform feature extraction on the target image to obtain a target extracted feature, and to obtain a reference extracted feature based on the target extracted feature; an adversarial extracted feature obtaining module, configured to perform feature extraction on the current adversarial image to obtain an adversarial extracted feature; a target image loss value obtaining module, configured to calculate a feature difference value between the adversarial extracted feature and the reference extracted feature, and to obtain a target image loss value based on the feature difference value, the feature difference value being positively correlated with the target image loss value; and a target adversarial image obtaining module, configured to obtain a perturbation adjustment value based on the target image loss value, and to adjust the pixel values of the current adversarial image based on the perturbation adjustment value to obtain a target adversarial image corresponding to the target image.
In some embodiments, there are a plurality of target images, and the reference extracted feature obtaining module includes: a target extracted feature obtaining unit, configured to perform feature extraction on each target image to obtain the target extracted feature corresponding to each target image; a current occurrence probability calculation unit, configured to acquire a current feature distribution and calculate the current occurrence probability of each target extracted feature under the current feature distribution; a target feature distribution obtaining unit, configured to aggregate the current occurrence probabilities of the target extracted features into a current probability statistic, and to adjust the feature distribution parameters of the current feature distribution so that the current probability statistic increases, thereby obtaining a target feature distribution; and a reference extracted feature obtaining unit, configured to obtain the representative feature of the target feature distribution as the reference extracted feature.
In some embodiments, the current feature distribution includes a first current feature distribution and a second current feature distribution, and the current occurrence probability includes a first current occurrence probability under the first current feature distribution and a second current occurrence probability under the second current feature distribution. The target feature distribution obtaining unit is further configured to: obtain a first current distribution weight corresponding to the first current feature distribution and a second current distribution weight corresponding to the second current feature distribution; perform a weighted summation of the first current occurrence probability with the first current distribution weight and the second current occurrence probability with the second current distribution weight to obtain the current occurrence probability of the target extracted feature; aggregate the current occurrence probabilities of the target extracted features into a current probability statistic; and adjust the distribution parameters of the first and second current feature distributions together with the first and second current distribution weights so that the current probability statistic increases, thereby obtaining a first target feature distribution, a second target feature distribution, a first target distribution weight and a second target distribution weight.
In some embodiments, the reference extracted feature obtaining unit is further configured to: determine a first target occurrence probability of the adversarial extracted feature under the first target feature distribution, and obtain a first weighted probability based on the first target distribution weight and the first target occurrence probability; determine a second target occurrence probability of the adversarial extracted feature under the second target feature distribution, and obtain a second weighted probability based on the second target distribution weight and the second target occurrence probability; select, based on the first weighted probability and the second weighted probability, the feature distribution with the higher weighted probability from the first and second target feature distributions as the representative feature distribution; and acquire the representative feature of the representative feature distribution as the reference extracted feature.
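As an illustration of the two-distribution embodiment above, the following is a minimal sketch, assuming the target extracted features are stacked into a NumPy array and using scikit-learn's GaussianMixture to fit the two weighted components; the function names and the choice of two components are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_target_mixture(target_features: np.ndarray) -> GaussianMixture:
    # EM adjusts the two components' parameters and weights so the weighted
    # likelihood of the target features increases, playing the role of the
    # "current probability statistic" described above.
    gmm = GaussianMixture(n_components=2, covariance_type="full")
    gmm.fit(target_features)  # target_features: shape (N, D)
    return gmm

def reference_feature(gmm: GaussianMixture, adv_feature: np.ndarray) -> np.ndarray:
    # predict_proba returns each component's weighted likelihood for the
    # adversarial feature (normalized); the argmax selects the representative
    # feature distribution, and its mean serves as the reference feature.
    responsibilities = gmm.predict_proba(adv_feature.reshape(1, -1))[0]
    best = int(np.argmax(responsibilities))
    return gmm.means_[best]
```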
In some embodiments, the target image loss value obtaining module comprises: a first image loss value obtaining unit, configured to obtain a first image loss value based on the feature difference value, the feature difference value being positively correlated with the first image loss value; a first confidence determining unit, configured to determine, based on the adversarial extracted feature, a first confidence that the current adversarial image belongs to the target image category; a second image loss value obtaining unit, configured to obtain a second image loss value based on the first confidence, the second image loss value being negatively correlated with the first confidence; and a target image loss value obtaining unit, configured to obtain the target image loss value based on the first image loss value and the second image loss value.
In some embodiments, the second image loss value obtaining unit is further configured to determine, based on the adversarial extracted feature, a second confidence that the current adversarial image belongs to the adversarial image category, and to obtain the second image loss value based on a first confidence difference between the first confidence and the second confidence, the second image loss value being negatively correlated with the first confidence difference.
In some embodiments, the target image loss value obtaining module comprises: a first image loss value obtaining unit, configured to obtain a first image loss value based on the feature difference value, the feature difference value being positively correlated with the first image loss value; a reference confidence determining unit, configured to determine, based on the adversarial extracted feature, the reference confidence of the current adversarial image for each reference image category; a third confidence obtaining unit, configured to select the maximum confidence among the reference confidences as a third confidence; a third image loss value obtaining unit, configured to obtain a third image loss value based on a second confidence difference between the first confidence and the third confidence, the third image loss value being negatively correlated with the second confidence difference; and a target image loss value obtaining unit, configured to obtain the target image loss value based on the first image loss value and the third image loss value.
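A hedged sketch of how the first-versus-third confidence margin could enter the loss, assuming a PyTorch classifier whose logits are converted to confidences with softmax; margin_loss and the 1-D logits layout are illustrative assumptions, not the patent's implementation.

```python
import torch

def margin_loss(logits: torch.Tensor, target_class: int) -> torch.Tensor:
    # logits: 1-D tensor of class scores for the current adversarial image.
    probs = torch.softmax(logits, dim=-1)
    first_conf = probs[target_class]            # "first confidence"
    mask = torch.ones_like(probs, dtype=torch.bool)
    mask[target_class] = False
    third_conf = probs[mask].max()              # max over the reference classes
    # Negatively correlated with (first_conf - third_conf): the loss falls as
    # the target-class confidence pulls ahead of the best other class.
    return third_conf - first_conf
```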
In some embodiments, the target adversarial image obtaining module includes: an updated current adversarial image obtaining unit, configured to, when the second confidence difference is smaller than a confidence difference threshold, adjust the pixel values of the current adversarial image based on the perturbation adjustment value to obtain an updated current adversarial image; and a target adversarial image obtaining unit, configured to return to the step of performing feature extraction on the current adversarial image to obtain the adversarial extracted feature until the second confidence difference reaches the confidence difference threshold, and to take the current adversarial image at that point as the target adversarial image corresponding to the target image.
In some embodiments, the target adversarial image obtaining module includes: an updated current adversarial image obtaining unit, configured to adjust the pixel values of the current adversarial image based on the perturbation adjustment value to obtain an updated current adversarial image; and a target adversarial image obtaining unit, configured to calculate a current image difference between the updated current adversarial image and the original adversarial image corresponding to it, to return to the step of performing feature extraction on the current adversarial image to obtain the adversarial extracted feature while the current image difference is smaller than an image difference threshold, and, once the current image difference reaches the image difference threshold, to take the current adversarial image as the target adversarial image corresponding to the target image.
In some embodiments, the perturbation adjustment value includes a pixel adjustment value for each pixel, and the target adversarial image obtaining module includes: a pixel adjustment value obtaining unit, configured to differentiate the target image loss value with respect to the pixel values of the pixels of the current adversarial image to obtain the pixel adjustment value corresponding to each pixel in the current adversarial image; and a pixel value adjusting unit, configured to adjust the pixel values of the pixels in the current adversarial image based on their respective pixel adjustment values to obtain the target adversarial image corresponding to the target image.
In some embodiments, the apparatus further comprises: an adversarial confidence obtaining module, configured to input the target adversarial image into a to-be-trained image recognition model corresponding to the target image category to obtain the adversarial confidence of the target adversarial image for the target image category; a target model loss value determining module, configured to determine a target model loss value based on the adversarial confidence, the target model loss value being positively correlated with the adversarial confidence; and a trained image recognition model obtaining module, configured to adjust the model parameters of the image recognition model based on the target model loss value to obtain a trained image recognition model.
A computer device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the steps of the image processing method described above.
A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the image processing method described above.
In some embodiments, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps in the above method embodiments.
The image processing method, apparatus, computer device and storage medium obtain a target image of a target image category and a current adversarial image corresponding to an adversarial image category; perform feature extraction on the target image to obtain a target extracted feature, and obtain a reference extracted feature based on the target extracted feature; perform feature extraction on the current adversarial image to obtain an adversarial extracted feature; calculate a feature difference value between the adversarial extracted feature and the reference extracted feature, and obtain a target image loss value based on the feature difference value, the two being positively correlated; and obtain a perturbation adjustment value based on the target image loss value and adjust the pixel values of the current adversarial image accordingly to obtain the target adversarial image corresponding to the target image. Because the reference extracted feature is derived from the target extracted features, it reflects the features of images of the target image category, so the feature difference value reflects the gap between the adversarial extracted feature and the features of target-category images. Since the target image loss value is positively correlated with the feature difference value, adjusting the perturbation in the direction that reduces the target image loss value also drives down the feature difference value of the current adversarial image, which increases the feature-level similarity between the target adversarial image and images of the target image category and thereby improves the adversarial effect of the adversarial sample.
Drawings
FIG. 1A is a diagram of an application environment of an image processing method in some embodiments;
FIG. 1B is a diagram of an exemplary image processing system in accordance with certain embodiments;
FIG. 2 is a flow diagram illustrating a method of image processing in some embodiments;
FIG. 3A is a distribution plot of samples in feature space in some embodiments;
FIG. 3B is a schematic diagram of an interface for triggering an image recognition request and displaying an image recognition result in some embodiments;
FIG. 4 is a schematic flow chart of the steps for obtaining reference extracted features in some embodiments;
FIG. 5 is a schematic flow chart of the steps for obtaining reference extracted features in some embodiments;
FIG. 6 is a schematic illustration of a confrontation image in some embodiments;
FIG. 7 is a flow diagram illustrating a method of image processing in some embodiments;
FIG. 8 is a block diagram of an image processing apparatus in some embodiments;
FIG. 9 is a diagram of the internal structure of a computer device in some embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Artificial Intelligence (AI) comprises the theories, methods, techniques and application systems that use digital computers, or machines controlled by digital computers, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making.
Artificial intelligence is a comprehensive discipline covering a wide range of technologies at both the hardware and software levels. Basic AI infrastructure includes sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied in all fields of AI. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching-based learning.
With the research and progress of artificial intelligence technology, AI has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, unmanned aerial vehicles, robots, smart medical care, and smart customer service.
The scheme provided by the embodiments of the present application relates to technologies such as artificial neural networks, and is specifically explained by the following embodiments:
the image processing method provided by the present application can be applied in the application environment shown in FIG. 1A, in which the terminal 102 communicates with the server 104 via a network.
The terminal 102 may transmit acquired images to the server 104; for example, the terminal 102 may capture images itself, or may communicate with an image capture device, receive the images it captures, and forward them to the server 104. The server 104 may receive the images sent by the terminal 102 and obtain, from the received images, a target image of a target image category and a current adversarial image corresponding to an adversarial image category; perform feature extraction on the target image to obtain a target extracted feature, and obtain a reference extracted feature based on it; perform feature extraction on the current adversarial image to obtain an adversarial extracted feature; calculate a feature difference value between the adversarial extracted feature and the reference extracted feature, and obtain a target image loss value based on the feature difference value, the two being positively correlated; and obtain a perturbation adjustment value based on the target image loss value and adjust the pixel values of the current adversarial image accordingly to obtain the target adversarial image corresponding to the target image. The server 104 may use the target adversarial image as an adversarial sample, adversarially train the image recognition model corresponding to the target image category to obtain a trained image recognition model, and perform image recognition with the trained model. The image recognition model corresponding to the target image category is a trained model used to recognize images of the target image category (it may also recognize images of other categories). For example, the server 104 may train an initial image recognition model with images of the target image category until the model converges, obtaining a pre-trained image recognition model, that is, a model pre-trained on normal samples that still needs adversarial training; it may then use the target adversarial image to adversarially train this pre-trained model, improving its ability to recognize adversarial samples and its robustness. The initial image recognition model may be based on a convolutional neural network, for example a ResNet (Residual Neural Network) such as ResNet50.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, portable wearable devices, and image capturing devices. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In some embodiments, as shown in FIG. 1B, an application scenario of the image processing method is provided. The trained image recognition model is deployed on the cloud server 108. The front end A106 may send an image recognition request carrying an image to be recognized to the cloud server 108; the cloud server 108 obtains the image to be recognized from the request, processes it using the image processing method provided by the present application to obtain an image recognition result, and may send the result to the front end B110. The front end B110 may be, for example, a computer or a mobile phone, and the front end A106 may be an image capture device. It is understood that the front end A106 and the front end B110 may be the same device or different devices.
In some embodiments, as shown in FIG. 2, an image processing method is provided. Taking the application of the method to the server 104 in FIG. 1A as an example, it includes the following steps:
s202, acquiring a target image of the target image category and a current confrontation image corresponding to the confrontation image category.
An image category is the category to which an image belongs, and may be determined from the image information, for example from the objects the image contains: an image containing a kitten may be assigned to the category "cat", and an image containing a puppy to the category "dog". The target image category may be any image category, such as "cat". The adversarial image category is an image category different from the target image category, for example "dog".
The image categories may be determined according to the pre-trained image recognition model that is to undergo adversarial training. When the pre-trained image recognition model is a model for recognizing objects in images, the category of an image may be determined according to the object it contains. The object may be inanimate or animate: an inanimate object may be furniture, for example a table or a chair; an animate object may be, for example, a plant, a bacterium, an animal, or a part of a human body such as a lung, a heart, or an ocular fundus. For example, when the pre-trained image recognition model is a model for recognizing "cat", the image category "cat" may be taken as the target image category and images of cats as target images; and when the aim is to improve the model's ability to recognize adversarial samples derived from images of the "dog" class, "dog" may be taken as the adversarial image category, with an image of a dog used directly, or after adjustment, as the current adversarial image. Similarly, when the pre-trained image recognition model recognizes a human body part, for example the lung, the category of images containing lungs may be taken as the target image category and lung images acquired as the target images.
A target image is an image belonging to the target image category, i.e. its true image category is the target image category. The current adversarial image may be an original adversarial image that has not been adjusted, or an image obtained by applying one or more perturbation adjustments to the original adversarial image using the method provided in the embodiments of the present application; the original adversarial image is unadjusted and belongs to the adversarial image category. The target image and the original adversarial image may be real images captured directly by an image capture device.
Specifically, the server may obtain an original adversarial image of the adversarial image category and take it as the current adversarial image, or it may first perturb the original adversarial image, for example by modifying its pixel values or superimposing noise on it, and take the perturbed image as the current adversarial image.
In some embodiments, the terminal may send an adversarial training request to the server, the request carrying the model to be adversarially trained and the adversarial image category. The server may obtain an image of the adversarial image category, derive the current adversarial image from it, and determine the image category that the model to be trained is used to recognize as the target image category.
S204: perform feature extraction on the target image to obtain a target extracted feature, and obtain a reference extracted feature based on the target extracted feature.
The target extracted feature is the feature obtained by performing feature extraction on the target image. The reference extracted feature is a feature derived from the target extracted features: it may be a target extracted feature itself, or, when there are multiple target images, a feature obtained by statistical calculation over the target extracted features of the individual target images. Statistical calculations include, but are not limited to, mean calculation and covariance calculation.
Specifically, the server may perform feature extraction on the target image using an artificial-intelligence-based neural network model to obtain the target extracted feature; for example, it may use the pre-trained image recognition model for the extraction.
In some embodiments, the server may perform feature extraction on the current adversarial image with the pre-trained image recognition model to obtain the adversarial extracted feature, and determine the reference extracted feature based on it; the reference extracted features corresponding to different adversarial extracted features may be the same or different. For example, the server may determine the reference extracted feature from the differences between the adversarial extracted feature and the individual target extracted features, taking as the reference extracted feature a target extracted feature whose difference from the adversarial extracted feature is small.
In some embodiments, the server may obtain the target extracted features of a plurality of target images, perform statistics over them to determine the target feature distribution they satisfy, and determine the reference extracted feature corresponding to the adversarial extracted feature based on that distribution. For example, the server may take the representative feature of the target feature distribution as the reference extracted feature, such as the feature with the highest occurrence probability under the distribution; when the target feature distribution is Gaussian, its mean may be used as the reference extracted feature.
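A minimal sketch of the Gaussian case, assuming the target extracted features are stacked into an (N, D) PyTorch tensor; gaussian_reference is an illustrative name, and the fit here is simply the sample mean and covariance.

```python
import torch

def gaussian_reference(target_features: torch.Tensor):
    # target_features: shape (N, D), one row per target image.
    mu = target_features.mean(dim=0)                 # distribution mean
    centered = target_features - mu
    # Unbiased sample covariance, shape (D, D).
    sigma = centered.T @ centered / (target_features.shape[0] - 1)
    return mu, sigma  # mu doubles as the reference extracted feature
```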
S206: perform feature extraction on the current adversarial image to obtain an adversarial extracted feature.
Specifically, the adversarial extracted feature is the feature obtained by performing feature extraction on the current adversarial image. The current adversarial image may be feature-extracted in the same manner as the target image; for example, the adversarial extracted feature and the target extracted feature may be outputs of the same feature extraction layer of the pre-trained image recognition model. A feature extraction layer is a layer used to extract features and may include at least one of a linear feature extraction layer or a nonlinear feature extraction layer; a nonlinear feature extraction layer may be, for example, an activation layer.
In some embodiments, the pre-trained image recognition model includes a plurality of feature extraction layers. The server may input the target image into the model and obtain the target extracted features output by at least two feature extraction layers, input the current adversarial image into the model and obtain the adversarial extracted features output by each of those layers, and then, for the target extracted feature and adversarial extracted feature output by the same layer, determine the reference extracted feature corresponding to that layer's adversarial extracted feature.
In some embodiments, the server may input the target image into the pre-trained image recognition model and take the features output by its nonlinear extraction layers as the target extracted features, and input the current adversarial image into the model and take the features output by the nonlinear extraction layers as the adversarial extracted features. There may be one or more nonlinear extraction layers, where "plural" means at least two.
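One plausible way to collect the outputs of several intermediate layers is with forward hooks, sketched below under the assumption that the pre-trained recognizer is a torchvision ResNet50; the choice of layer3 and layer4 as hook points is illustrative, and detach() is appropriate here because the hooks are used only to gather target features.

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)  # stand-in for the pre-trained recognizer
model.eval()
features = {}

def save_output(name):
    def hook(module, inputs, output):
        # Flatten spatial dimensions so each image yields one feature vector.
        features[name] = output.flatten(start_dim=1).detach()
    return hook

# Illustrative hook points; any intermediate/nonlinear layers could be used.
model.layer3.register_forward_hook(save_output("layer3"))
model.layer4.register_forward_hook(save_output("layer4"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # features now holds both layer outputs
```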
S208: calculate a feature difference value between the adversarial extracted feature and the reference extracted feature, and obtain a target image loss value based on the feature difference value; the feature difference value is positively correlated with the target image loss value.
Specifically, the feature difference value measures the difference between the adversarial extracted feature and the reference extracted feature. The server may calculate the distance between the two features as the feature difference value, for example the Euclidean distance. The server may also calculate a feature similarity between the adversarial extracted feature and the reference extracted feature and derive the feature difference value from it; the feature difference value is negatively correlated with the feature similarity, so, for example, the negative or the reciprocal of the feature similarity may be used as the feature difference value.
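A small sketch of the two variants named above (a Euclidean distance, and a difference derived from a similarity), assuming 1-D PyTorch feature vectors; feature_difference is an illustrative helper name.

```python
import torch
import torch.nn.functional as F

def feature_difference(adv_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
    # Variant 1: Euclidean distance between the feature vectors.
    euclidean = torch.norm(adv_feat - ref_feat, p=2)
    # Variant 2: a difference derived from a similarity; the negative cosine
    # similarity falls as the two features become more alike.
    cosine_diff = -F.cosine_similarity(adv_feat, ref_feat, dim=0)
    return euclidean  # either variant is positively correlated with the loss
```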
In some embodiments, the server may obtain the target extracted feature and the adversarial extracted feature output by a given feature extraction layer of the pre-trained image recognition model, together with the reference extracted feature determined from that target extracted feature; calculate the feature difference value between the adversarial extracted feature and the reference extracted feature; transpose the feature difference value to obtain a transposed difference value; and obtain a layer image loss value based on the feature difference value and the transposed difference value, the layer image loss value being positively correlated with the transposed difference value. For example, when the reference extracted feature is the mean of the Gaussian corresponding to the target feature distribution, the server may multiply the transposed difference value by the covariance matrix of the Gaussian to obtain a first operation result, multiply the first operation result by the feature difference value to obtain the layer image loss value of that feature extraction layer, and obtain the target image loss value based on the layer image loss values, the target image loss value being positively correlated with them.
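Reconstructing the layer loss from this description, and assuming the multiplication involves the inverse covariance (the Mahalanobis form, which is what makes the loss shrink as the adversarial feature approaches the Gaussian mean), the loss for layer l would read:

```latex
\delta_l = f^{(l)}_{\mathrm{adv}} - \mu_l, \qquad
\mathcal{L}_l = \delta_l^{\top}\,\Sigma_l^{-1}\,\delta_l, \qquad
\mathcal{L}_{\mathrm{target}} = \sum_l \lambda_l\,\mathcal{L}_l ,
```

where \mu_l and \Sigma_l are the mean and covariance of the layer-l target feature distribution, and \lambda_l is an assumed per-layer weight.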
A positive correlation means that, other conditions being unchanged, the two variables change in the same direction: when one variable decreases, the other also decreases. It should be understood that positive correlation here means the directions of change are consistent, but it does not require that every change in one variable be accompanied by a change in the other. For example, variable b may be set to 100 while variable a is between 10 and 20, and to 120 while a is between 20 and 30; the directions of change of a and b are then consistent (when a is larger, b is not smaller), but b remains unchanged while a varies within the range 10 to 20.
In some embodiments, the server may obtain the adversarial recognition result that the pre-trained image recognition model outputs for the current adversarial image. The result may include the confidence that the current adversarial image belongs to the target image category, and a second image loss value may be obtained based on this confidence, the second image loss value being negatively correlated with it. The server may then obtain the target image loss value from the first image loss value and the second image loss value. Confidence represents the likelihood that an image belongs to each image category; the higher the confidence, the greater the likelihood, and its value may range from 0 to 1.
A negative correlation means that, other conditions being unchanged, the two variables change in opposite directions: when one variable decreases, the other increases. As with positive correlation, negative correlation here means the directions of change are opposite, but it does not require that every change in one variable be accompanied by a change in the other.
S210: obtain a perturbation adjustment value based on the target image loss value, and adjust the pixel values of the current adversarial image based on the perturbation adjustment value to obtain the target adversarial image corresponding to the target image.
The perturbation adjustment value is used to adjust the pixel values of the pixels in the current adversarial image, and may include a perturbation adjustment value corresponding to each pixel. The image recognition model corresponding to the target image category is the trained, pre-trained model used to recognize images of the target image category; it may, of course, also recognize images of other categories.
The target adversarial image is the image obtained by adjusting the pixel values of the current adversarial image, and it can serve as an adversarial sample. An adversarial sample is a sample that has undergone an adversarial attack. An adversarial attack causes a model based on Deep Neural Networks (DNN) to produce erroneous results by adding small perturbations to a normal sample. A normal sample is normal data that has not been adversarially attacked, for example an image captured by an image capture device.
Adversarial training is an important way to enhance the robustness of neural networks. During adversarial training, samples are mixed with small perturbations (changes that are slight but likely to cause misclassification), and the neural network is trained to adapt to such changes, making it robust to adversarial samples.
Specifically, the server computes the derivative of the target image loss value with respect to the current adversarial image to obtain the perturbation adjustment value of each pixel in the current adversarial image; different pixels may have different perturbation adjustment values. For each pixel, the server may add or multiply its pixel value with the corresponding perturbation adjustment value to obtain the adjusted pixel value. The adjusted image may be taken directly as the target adversarial image, or its pixel values may be adjusted repeatedly until a pixel value adjustment end condition is met, with the current adversarial image that satisfies the condition taken as the target adversarial image. The end condition includes, but is not limited to, the change in the target image loss value being smaller than a loss change threshold, or the difference between the adjusted current adversarial image and its original adversarial image reaching an image difference threshold.
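Pulling S210 together, the following is a minimal sketch of the iterative pixel update, assuming a PyTorch model, images scaled to [0, 1], and a loss_fn that returns the target image loss value; the step size, iteration count, and the epsilon-ball stopping rule are illustrative choices in the spirit of the end conditions above.

```python
import torch

def generate_target_adversarial(model, adv_img, loss_fn, steps=10,
                                step_size=1 / 255, eps=8 / 255):
    # adv_img: current adversarial image with values in [0, 1]; names illustrative.
    original = adv_img.clone().detach()
    x = adv_img.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = loss_fn(model, x)               # target image loss value
        grad, = torch.autograd.grad(loss, x)   # per-pixel derivative
        with torch.no_grad():
            # Move pixels in the direction that decreases the loss.
            x = x - step_size * grad.sign()
            # Keep the difference from the original image within a threshold.
            x = original + (x - original).clamp(-eps, eps)
            x = x.clamp(0.0, 1.0)
    return x.detach()
```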
In some embodiments, after the target adversarial image is obtained, the image recognition model corresponding to the target image category may be adversarially trained with it, and image recognition may then be performed with the trained model. For example, the server may use the target adversarial image as a negative sample to adversarially train the pre-trained image recognition model, improving the model's robustness to adversarial samples. Specifically, the server may first train the untrained image recognition model with normal samples to obtain the pre-trained model, and then train the pre-trained model with a combination of normal samples and adversarial samples to obtain the trained image recognition model.
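A hedged sketch of one adversarial training step that mixes normal samples with target adversarial images used as negative samples, assuming a PyTorch classifier and cross-entropy loss; the function name and optimizer choice are illustrative.

```python
import torch

def adversarial_training_step(model, optimizer, images, labels,
                              adv_images, adv_labels):
    # Mix normal samples with adversarial samples in a single batch.
    model.train()
    x = torch.cat([images, adv_images])
    y = torch.cat([labels, adv_labels])
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```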
In the image processing method, a target image of a target image category and a current adversarial image corresponding to an adversarial image category are obtained; feature extraction is performed on the target image to obtain a target extracted feature, and a reference extracted feature is obtained from it; feature extraction is performed on the current adversarial image to obtain an adversarial extracted feature; a feature difference value between the adversarial extracted feature and the reference extracted feature is calculated, and a target image loss value is obtained from it, the two being positively correlated; a perturbation adjustment value is obtained from the target image loss value, and the pixel values of the current adversarial image are adjusted accordingly to obtain the target adversarial image corresponding to the target image; the image recognition model corresponding to the target image category is adversarially trained with the target adversarial image, and image recognition is performed with the trained model. Because the reference extracted feature is derived from the target extracted features, it reflects the features of images of the target image category, and the feature difference value therefore reflects the gap between the adversarial extracted feature and those features. Since the target image loss value is positively correlated with the feature difference value, adjusting the perturbation in the direction that decreases the target image loss value drives the feature difference value of the current adversarial image downward, which increases the feature-level similarity between the target adversarial image and images of the target image category and improves the adversarial effect. When a model is trained using the target adversarial image, its robustness and reliability can be improved.
The image processing method provided by the present application constrains the features output by the feature extraction layers of the image recognition model, and may be called a Hierarchical Feature Constraint (HFC) method. Adversarial samples generated by this method can be used to train a detection defense model, improving its defensive performance and robustness and increasing the security of data encryption. A detection defense model is used to detect adversarial samples, improving data security. Adversarial samples generated by the image processing method can also be applied to the security management of medical data, improving its effectiveness.
FIG. 3A shows the distribution of the features of adversarial samples generated by the image processing method provided in the present application, as seen at the outputs of layers 34, 42, and 48 of the network model ResNet50 for the input samples. As FIG. 3A shows, the feature distribution of these adversarial samples fits the feature distribution of normal samples of the target image category, so the adversarial samples can hide inside that distribution. In the feature space, by contrast, the distribution of adversarial samples generated by the BIM (Basic Iterative Method) attack is easily distinguished from the distribution of normal samples of the target image category.
Table 1 shows the detection performance of the detection defense methods when conventional adversarial samples and new adversarial samples are used for adversarial attack, where conventional adversarial samples are those generated by conventional adversarial attack methods, and new adversarial samples are those generated by the image processing method provided in the present application. Two data sets are used in Table 1. The fundus data set comprises 3663 high-resolution fundus images from the Kaggle Diabetic Retinopathy (DR) classification competition; each image is labeled with one of five grades ("no DR / mild / moderate / severe / proliferative DR"), and a binary classification experiment was performed on this data set with all cases showing diabetic retinopathy treated as a single class. The Kaggle chest X-ray data set relates to a pneumonia classification task and comprises 5863 X-ray images labeled "pneumonia" or "normal".
In Table 1, MIM (Momentum Iterative Method) is a gradient-based iterative attack algorithm; BIM (Basic Iterative Method) is the iterative FGSM algorithm; FGSM (Fast Gradient Sign Method) is the fast gradient sign attack; PGD (Projected Gradient Descent) is the projected gradient descent algorithm; and CW (Carlini and Wagner) is an optimization-based attack named after its authors, C for Carlini and W for Wagner. KD, MAHA, LID, SVM, DNN and BU are six detection defense methods: KD, MAHA and LID are distance measures in a high-dimensional feature space, while SVM and DNN are machine-learning-based methods. KD (Kernel Density) refers to kernel density detection, MAHA to the Mahalanobis distance, LID (Local Intrinsic Dimensionality) to local intrinsic dimensionality, SVM (Support Vector Machine) to support vector machines, DNN (Deep Neural Network) to deep neural networks, and BU (Bayesian Uncertainty) to Bayesian uncertainty estimates.
TABLE 1. Detection performance of the detection defense methods under adversarial attack with conventional adversarial samples versus new adversarial samples
In Table 1, Adv. Acc is the attack success rate when new adversarial samples are used for the attack. For example, with the detection defense method KD on the fundus data set under the MIM attack, the attack success rate is 99.5; with KD on the chest X-ray data set under the MIM attack, it is 98.1.
In Table 1, AUC and TPR indicate the detection performance of a detection defense method against an attack method. AUC (Area Under Curve) is the area under the ROC (Receiver Operating Characteristic) curve and lies between 0 and 1; it directly reflects the detection performance, and the higher the AUC, the better the detection. TPR (True Positive Rate) is the proportion of adversarial samples identified by the detection defense method among all adversarial samples; the larger the TPR, the better the detection. In each entry "a\b", a (the value to the left of the "\") is the detection performance when attacks use conventional adversarial samples, and b (the value to the right of the "\") is the detection performance when new adversarial samples are combined with the conventional ones; within one entry, a and b share the same data set and the same detection defense method. As Table 1 shows, on both the fundus data set and the chest X-ray data set, the value to the left of the "\" exceeds the value to the right for both AUC and TPR: detection performance against conventional adversarial samples is higher than against conventional samples combined with new ones. This demonstrates that the detection defense methods are weaker at detecting the adversarial samples generated by the image processing method of the present application, i.e. the adversarial effect of those samples exceeds the attack effect of samples generated by conventional adversarial attack methods.
For example, among the detection results for the fundus data set in Table 1, the 98.8 in "98.8/72.0" is the AUC of the detection defense method KD when attacked by conventional adversarial samples generated with MIM, and the 72.0 is its AUC when attacked by those conventional samples combined with new adversarial samples; 98.8 is greater than 72.0. Likewise, the 96.3 in "96.3/10.0" is KD's TPR under attack by conventional MIM samples, and the 10.0 is its TPR when new adversarial samples are added; 96.3 is greater than 10.0. Combining adversarial samples generated by the image processing method of the present application with those generated by the conventional MIM attack thus lowers the detection performance of KD, which shows that the adversarial effect of the samples generated by this method exceeds that of the samples generated by MIM.
The countermeasure samples generated by the image processing method provided by the present application can also be used to train the detection defense method so that it correctly identifies such samples, improving the detection capability of the detection defense method for countermeasure samples and its robustness against them.
The image processing method provided by this embodiment can be applied in the field of medical image analysis: it can be used to train a detection defense model, improving the robustness of that model, and it can also be used to encrypt medical data, causing a classifier based on a deep neural network to produce an erroneous result through image perturbations imperceptible to the human eye, thereby achieving a data encryption effect.
The image processing method provided in this embodiment may also be applied to the field of traffic sign recognition, for example, the image recognition model in this application may be a traffic sign classification model, and the traffic sign classification model is used to recognize a traffic sign.
In some embodiments, the terminal may send an image recognition request to the server, where the request may carry an image to be recognized; the server may recognize the image with the trained image recognition model to obtain an image recognition result and return it to the terminal, which may display the result in its interface. For example, fig. 3B shows a schematic diagram of an interface for triggering an image recognition request and displaying an image recognition result in some implementations. The interface includes a picture uploading area 302, a determination result display area 304, and a probability display area 306; the "Xx traffic classifier" identifies the name of the traffic sign in the image. When a user needs to identify an image, the user can click the "upload" button, enter the image upload interface, and select the image; after the image is selected and a confirmation operation is received, the terminal can trigger sending an image identification request for a target image category to the server. The server obtains the image to be identified carried in the request and inputs it into the trained image identification model corresponding to the target image category, obtaining the probabilities that the image belongs to the various image categories, for example a probability of 0.9766 for "dog" and 0.3452 for "cat"; when the probability of "dog" is greater than a preset probability of 0.85, the server can also output an image identification result indicating that the image includes a dog.
In some embodiments, there are multiple target images. As shown in fig. 4, performing feature extraction on the target images to obtain target extraction features and obtaining the reference extraction feature based on the target extraction features includes: S402, performing feature extraction on each target image to obtain the target extraction feature corresponding to each target image; S404, acquiring a current feature distribution and calculating the current occurrence probability of each target extraction feature in the current feature distribution; S406, counting the current occurrence probabilities corresponding to the target extraction features to obtain a current probability statistic, and adjusting the feature distribution parameters corresponding to the current feature distribution so that the current probability statistic increases, obtaining the target feature distribution; and S408, acquiring the target representative feature corresponding to the target feature distribution as the reference extraction feature.
The current feature distribution may be any feature distribution, for example a Gaussian distribution. The feature distribution parameters corresponding to the current feature distribution determine that distribution; for example, when the current feature distribution is Gaussian, the feature distribution parameters may include a mean and a covariance matrix. The current feature distribution encodes the likelihood with which features occur, so the likelihood of a feature's occurrence can be determined from it. The current occurrence probability refers to the probability that the target extraction feature occurs in the current feature distribution and may be represented by a probability value, where a greater value means the feature is more likely to occur; the probability may be, for example, 0.3.
The current probability statistic is obtained by aggregating the current occurrence probabilities corresponding to the target extraction features, and it is positively correlated with those occurrence probabilities. For example, the current occurrence probabilities may be summed to obtain the current probability statistic, or multiplied together, or the logarithm of each current occurrence probability may be taken to obtain logarithmic occurrence probabilities, which are then aggregated; for example, the sum of the logarithmic occurrence probabilities may be used as the current probability statistic.
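As an illustration only, a minimal sketch of this statistic in Python, assuming the current occurrence probabilities have already been computed (all names are hypothetical):

```python
import numpy as np

def current_probability_statistic(occurrence_probs):
    """Aggregate per-feature occurrence probabilities into one statistic.

    Summing log-probabilities (the log-likelihood) is numerically safer
    than multiplying the raw probabilities, and both aggregates are
    positively correlated with the individual occurrence probabilities.
    """
    occurrence_probs = np.asarray(occurrence_probs)
    return np.sum(np.log(occurrence_probs))

# Example: three target extraction features with their current
# occurrence probabilities under the current feature distribution.
stat = current_probability_statistic([0.3, 0.12, 0.25])
```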
The target representative feature refers to a feature that is representative of the target feature distribution; it may be, for example, the target center feature of the target feature distribution, i.e. the feature with the highest occurrence probability in that distribution. The target representative feature may also be a feature in the target feature distribution whose distance from the target center feature is less than a feature distance threshold.
Specifically, the server may update the feature distribution parameters corresponding to the current feature distribution in the direction that increases the current probability statistic to obtain updated feature distribution parameters, then determine whether an update stop condition is reached; if not, it returns to the updating step until the update stop condition is reached, and takes the current feature distribution corresponding to the updated feature distribution parameters as the target feature distribution. The update stop condition includes, but is not limited to, the difference between the feature distribution parameters of two adjacent updates being smaller than a distribution parameter difference threshold, or the difference between the current probability statistics of two adjacent updates being smaller than a probability statistic difference threshold. The distribution parameter difference threshold and the probability statistic difference threshold may be set as needed or preset.
In some embodiments, the current feature distribution may include a plurality of current sub-distributions, where a plurality refers to at least two. For each target extraction feature, the server may calculate the current sub-occurrence probability of that feature in each current sub-distribution, where a current sub-occurrence probability refers to the probability that the target extraction feature occurs in a sub-distribution, and perform a statistical calculation, for example a weighted calculation, over these sub-occurrence probabilities to obtain the current occurrence probability corresponding to the target extraction feature.
In some embodiments, each sub-distribution has its own sub-distribution parameters, and the feature distribution parameters may include the sub-distribution parameters corresponding to each current sub-distribution. The server may continually update the sub-distribution parameters of each current sub-distribution in the direction that increases the current probability statistic until the update stop condition is reached, take the current sub-distributions corresponding to the updated sub-distribution parameters as target sub-distributions, and obtain the target feature distribution from the target sub-distributions, for example by superimposing them.
In some embodiments, the server may obtain the representative feature corresponding to each target sub-distribution, yielding sub-representative features, and determine the reference extraction feature from them. For example, the server may perform a statistical calculation, such as a weighted calculation, over the sub-representative features corresponding to the target sub-distributions to obtain the reference extraction feature, or select the reference extraction feature from among the sub-representative features, for example by calculating the similarity between the confrontation extraction feature and each sub-representative feature and taking the sub-representative feature with the largest similarity as the reference extraction feature. A sub-representative feature may be chosen as needed, for example the feature with the highest occurrence probability in its target sub-distribution.
In some embodiments, the current feature distribution is a Gaussian mixture distribution that includes two or more sub-distributions, each of which is a Gaussian distribution; the Gaussian mixture distribution can be obtained by superimposing its sub-distributions. The Gaussian mixture distribution can be expressed as formula (1):

$$p\big(f_l(x_1;\theta)\big) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}\big(f_l(x_1;\theta) \mid \mu_k, \Sigma_k\big) \quad (1)$$

where $x_1$ denotes an image whose real image class is the target image class, $f_l(x_1;\theta)$ denotes the feature corresponding to the sample $x_1$ output by the $l$-th layer of the pre-trained image recognition model based on the deep neural network, $\theta$ denotes the model parameters of the pre-trained image recognition model, and $K$ denotes the number of sub-distributions included in the Gaussian mixture distribution, which can be set as needed and may be, for example, 2. $\pi_k$ denotes the mixing coefficient corresponding to the $k$-th sub-distribution, which may also be called the weight of the $k$-th sub-distribution, with $0 \le \pi_k \le 1$; $\mathcal{N}(\cdot \mid \mu_k, \Sigma_k)$ denotes the probability density function of the $k$-th sub-distribution, i.e. a Gaussian distribution with mean $\mu_k$ and covariance matrix $\Sigma_k$; and $p(\cdot)$ denotes the Gaussian mixture distribution, which may also be referred to as a Gaussian mixture model. The mean $\mu_k$ of a sub-distribution can be expressed as formula (2):

$$\mu_k = \frac{1}{N} \sum_{i=1}^{N} h(X_i), \qquad h(X_i) = \gamma_{ik} \cdot X_i \quad (2)$$

where $N$ denotes the number of samples belonging to the $k$-th sub-distribution, $X_i$ denotes the $i$-th sample, $\mu_k$ denotes the mean of the $k$-th sub-distribution, and $\gamma_{ik}$ denotes the posterior probability that $X_i$ belongs to the $k$-th sub-distribution. The covariance matrix $\Sigma_k$ of the sub-distribution can be expressed as formula (3):

$$\Sigma_k = \frac{1}{N} \sum_{i=1}^{N} \gamma_{ik} \, (X_i - \mu_k)(X_i - \mu_k)^{\mathsf{T}} \quad (3)$$

where $\Sigma_k$ denotes the covariance matrix of the $k$-th sub-distribution.
In some embodiments, the server may obtain an initialized Gaussian mixture distribution as the current feature distribution, where an initialized Gaussian mixture distribution is one whose sub-distribution parameters have been initialized; the sub-distribution parameters include the mean, covariance matrix, and weight of each sub-distribution, i.e. initial values are set for the mean, covariance matrix, and weight of each sub-distribution in the Gaussian mixture distribution. The server can calculate the probability of each target extraction feature under the initialized Gaussian mixture distribution as its current occurrence probability, take the product of the current occurrence probabilities as the current probability statistic, or take the sum of the logarithms of the current occurrence probabilities as the current probability statistic, then iteratively adjust the sub-distribution parameters in the direction that increases the current probability statistic until either the change in the current probability statistic is smaller than a statistic change threshold or the change in the sub-distribution parameters is smaller than a sub-parameter change threshold, and take the Gaussian mixture distribution formed by the sub-distributions determined by the adjusted sub-distribution parameters as the target feature distribution. The statistic change threshold and the sub-parameter change threshold may be preset.
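One way to realize this fitting step is scikit-learn's GaussianMixture, which performs exactly this kind of EM iteration until the gain in log-likelihood falls below a tolerance; the feature shapes below are placeholder assumptions, not values from the application:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# target_features: (num_target_images, feature_dim) target extraction
# features taken from one layer of the pre-trained model; random
# placeholders stand in for real features here.
rng = np.random.default_rng(0)
target_features = rng.normal(size=(500, 64))

# K sub-distributions; EM iteratively adjusts the means, covariance
# matrices, and mixing weights in the direction that increases the
# log-likelihood (the current probability statistic), stopping when
# the improvement falls below tol, i.e. the update stop condition.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      tol=1e-4, max_iter=200, random_state=0)
gmm.fit(target_features)

# Fitted target feature distribution: per-sub-distribution parameters.
means, covariances, weights = gmm.means_, gmm.covariances_, gmm.weights_
```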
In this embodiment, the distribution parameters corresponding to the current feature distribution are adjusted in the direction that increases the current probability statistic, so that the current feature distribution continually approaches the real distribution satisfied by the features of images of the target image category; the target feature distribution can therefore accurately reflect that real distribution. The target representative feature corresponding to the target feature distribution is then obtained and used as the reference extraction feature, so that a feature representative of the target feature distribution serves as the reference extraction feature, improving the accuracy of the reference extraction feature.
In some embodiments, the current feature distribution includes a first current feature distribution and a second current feature distribution, and the current occurrence probability includes a first current occurrence probability corresponding to the first current feature distribution and a second current occurrence probability corresponding to the second current feature distribution. Counting the current occurrence probabilities corresponding to the target extraction features to obtain a current probability statistic, and adjusting the feature distribution parameters corresponding to the current feature distribution so that the current probability statistic increases, obtaining the target feature distribution, includes: acquiring a first current distribution weight corresponding to the first current feature distribution and a second current distribution weight corresponding to the second current feature distribution; performing a weighted summation of the first current occurrence probability and the second current occurrence probability based on the first current distribution weight and the second current distribution weight to obtain the current occurrence probability corresponding to the target extraction feature; and counting the current occurrence probabilities corresponding to the target extraction features to obtain the current probability statistic, and adjusting the distribution parameters corresponding to the first and second current feature distributions together with the first and second current distribution weights so that the current probability statistic increases, obtaining a first target feature distribution, a second target feature distribution, a first target distribution weight, and a second target distribution weight.
The first current feature distribution and the second current feature distribution may be the same type of probability distribution, and may be gaussian distributions, for example. The first current occurrence probability refers to a probability of occurrence of the target extraction feature in the first current feature distribution, and the second current occurrence probability refers to a probability of occurrence of the target extraction feature in the second current feature distribution.
The first current distribution weight represents a degree of likelihood that the target extraction feature belongs to the first current feature distribution, and the second current distribution weight represents a degree of likelihood that the target extraction feature belongs to the second current feature distribution. The sum of the weights corresponding to the distributions included in the current feature distribution is equal to 1, for example, when only the first current feature distribution and the second current feature distribution are included in the current feature distribution, the sum of the first current distribution weight and the second current distribution weight is 1. The initial values of the first current distribution weight and the second current distribution weight may be preset, and the initial values may be, for example, 0.5 and 0.5, respectively.
The target feature distribution includes the first target feature distribution and the second target feature distribution. The first target feature distribution, second target feature distribution, first target distribution weight, and second target distribution weight are, respectively, the distribution parameters of the first and second current feature distributions and the first and second current distribution weights at the time the update stop condition is reached.
Specifically, the server may calculate a product of the first current distribution weight and the first current occurrence probability to obtain a first result, calculate a product of the second current distribution weight and the second current occurrence probability to obtain a second result, and sum the first result and the second result to obtain the current occurrence probability corresponding to the target extraction feature.
In some embodiments, the server may adjust the distribution parameters, the first current distribution weight, and the second current distribution weight corresponding to the first current feature distribution and the second current feature distribution toward the direction that the current probability statistic value becomes larger until the update stop condition is reached, and use the updated first current feature distribution as the first target feature distribution, the updated second current feature distribution as the second target feature distribution, the updated first current distribution weight as the first target distribution weight, and the updated second current distribution weight as the second target distribution weight. The update stop condition may further include that a difference between the current distribution weights of the adjacent two times is less than a weight difference threshold.
In this embodiment, the distribution parameters corresponding to the first and second current feature distributions, together with the first and second current distribution weights, are adjusted in the direction that increases the current probability statistic to obtain the first target feature distribution, second target feature distribution, first target distribution weight, and second target distribution weight. The target feature distribution thus comprises the first and second target feature distributions, and using two distributions better reflects the distribution satisfied by the features of images of the target image category, improving the accuracy of the target feature distribution.
In some embodiments, as shown in fig. 5, obtaining the target representative feature corresponding to the target feature distribution as the reference extraction feature includes: S502, determining a first target occurrence probability of the confrontation extraction feature in the first target feature distribution, and obtaining a first weighted probability based on the first target distribution weight and the first target occurrence probability; S504, determining a second target occurrence probability of the confrontation extraction feature in the second target feature distribution, and obtaining a second weighted probability based on the second target distribution weight and the second target occurrence probability; S506, selecting, based on the first weighted probability and the second weighted probability, the feature distribution with the larger weighted probability from the first and second target feature distributions as the representative feature distribution; and S508, acquiring the target representative feature corresponding to the representative feature distribution as the reference extraction feature.
The first target occurrence probability refers to the probability that the confrontation extraction feature occurs in the first target feature distribution, and the second target occurrence probability refers to the probability that it occurs in the second target feature distribution. The representative feature distribution may be either of the first and second target feature distributions. The target representative feature refers to the representative feature of the representative feature distribution, for example the feature most likely to occur in that distribution.
Specifically, the server may calculate a product of a first target distribution weight and a first target occurrence probability to obtain a first weighted probability, calculate a product of a second target distribution weight and a second target occurrence probability to obtain a second weighted probability, compare the first weighted probability with the second weighted probability, determine a greater weighted probability of the first weighted probability and the second weighted probability as a target weighted probability, and use a feature distribution corresponding to the target weighted probability as a representative feature distribution.
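A sketch of this selection rule under the same assumptions, using log-densities for numerical stability; adv_feature and the parameter names are hypothetical and reuse the fitted mixture sketched above:

```python
import numpy as np
from scipy.stats import multivariate_normal

def select_representative(adv_feature, means, covariances, weights):
    """Pick the sub-distribution with the largest weighted density.

    log(weight_k) + log N(adv_feature | mean_k, cov_k) orders the
    sub-distributions the same way as weight_k * density_k; the argmax
    is the representative feature distribution, and its mean can serve
    as the target representative (reference extraction) feature.
    """
    weighted = np.array([
        np.log(w) + multivariate_normal.logpdf(adv_feature, mean=m, cov=c)
        for w, m, c in zip(weights, means, covariances)
    ])
    k = int(np.argmax(weighted))
    return k, means[k]  # representative distribution index, reference feature
```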
In some embodiments, the server may obtain a first image loss value based on the feature difference value, obtain a target image loss value according to the first image loss value, where the target image loss value is in a positive correlation with the first image loss value, and the first image loss value is in a positive correlation with the feature difference value.
In some embodiments, the target feature distribution may be a Gaussian mixture distribution, with the first and second target feature distributions being sub-distributions of that mixture. The server may determine one sub-distribution of the Gaussian mixture as the representative feature distribution; for example, it may determine the probability of the confrontation extraction feature occurring in each sub-distribution to obtain a probability set, and take the sub-distribution corresponding to the largest probability in that set as the representative feature distribution. For example, formula (4) may be adopted to determine the representative feature distribution:

$$k' = \arg\max_{k} \; \pi_k \, \mathcal{N}\big(f_l(x_2;\theta) \mid \mu_k, \Sigma_k\big) \quad (4)$$

Formula (4) expresses that the probability of the confrontation extraction feature occurring in the $k'$-th sub-distribution of the Gaussian mixture distribution is greater than its probability of occurring in any other sub-distribution, i.e. the $k'$-th sub-distribution is the sub-distribution closest to the current confrontation image and may be taken as the representative feature distribution. Here $x_2$ refers to the current confrontation image, and $f_l(x_2;\theta)$ denotes the confrontation extraction feature output for $x_2$ by the $l$-th layer of the pre-trained image recognition model.
In some embodiments, when the representative feature distribution is a sub-distribution of a Gaussian mixture distribution, the server may determine the probability of the confrontation extraction feature occurring in the sub-distribution corresponding to the representative feature distribution, and adjust the pixel values of the current confrontation image in the direction that increases this probability, obtaining the target confrontation image corresponding to the target image. The log-probability of the confrontation extraction feature occurring in the representative sub-distribution may be expressed as formula (5):

$$\log \mathcal{N}\big(f_l(x_2;\theta) \mid \mu_{k'}, \Sigma_{k'}\big) = -\frac{D}{2}\log(2\pi) - \frac{1}{2}\log\lvert\Sigma_{k'}\rvert - \frac{1}{2}\big(f_l(x_2;\theta) - \mu_{k'}\big)^{\mathsf{T}} \Sigma_{k'}^{-1} \big(f_l(x_2;\theta) - \mu_{k'}\big) \quad (5)$$

where $\mu_{k'}$ is the mean of the $k'$-th sub-distribution, $\Sigma_{k'}$ is the covariance matrix of the $k'$-th sub-distribution, and $D$ denotes the feature dimension. Since the first two terms in formula (5) are constant, the server can adjust the pixel values of the current confrontation image in the direction that increases the third term, obtaining the target confrontation image corresponding to the target image.
In some embodiments, the reference extraction feature may be the mean of the sub-distribution corresponding to the representative feature distribution. The server may calculate the layer image loss value corresponding to the confrontation extraction feature of the $l$-th layer of the pre-trained image recognition model using formula (6), and aggregate the layer image loss values of the feature extraction layers to obtain the first image loss value as in formula (7):

$$J_l = \frac{1}{2}\big(f_l(x_2;\theta) - \mu_{k'}\big)^{\mathsf{T}} \Sigma_{k'}^{-1} \big(f_l(x_2;\theta) - \mu_{k'}\big) \quad (6)$$

$$J_{HFC} = \sum_{l=1}^{L} \lambda_l \, J_l \quad (7)$$

where $J_l$ denotes the layer image loss value corresponding to the confrontation extraction feature of the $l$-th layer, $\lambda_l$ denotes the weight of the $l$-th layer, which can be set as needed, $J_{HFC}$ denotes the first image loss value, and $1 \le l \le L$ with $L$ the number of feature extraction layers. The server may adjust the pixel values of the current confrontation image in the direction that decreases the first image loss value, i.e. the direction that decreases the difference between the confrontation extraction features and the reference extraction features, which pulls the current confrontation image closer to the target images of the target image category in the high-dimensional feature space.
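A sketch of formulas (6) and (7) as reconstructed above, assuming the layer loss is the Mahalanobis-style distance to the representative sub-distribution mean; all names are illustrative:

```python
import numpy as np

def layer_loss(adv_feature, ref_mean, ref_cov):
    """Formula (6): distance between the confrontation extraction
    feature of one layer and the mean of its representative
    sub-distribution, scaled by the inverse covariance."""
    diff = adv_feature - ref_mean
    return 0.5 * diff @ np.linalg.inv(ref_cov) @ diff

def first_image_loss(adv_features, ref_means, ref_covs, lambdas):
    """Formula (7): J_HFC as the weighted sum of per-layer losses."""
    return sum(lam * layer_loss(f, m, c)
               for lam, f, m, c in zip(lambdas, adv_features,
                                       ref_means, ref_covs))
```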
In some embodiments, normal sample data may be collected and the Gaussian mixture model trained offline; the Gaussian mixture model is then deployed on the server side, which takes a normal sample as input and outputs a countermeasure sample.
In this embodiment, the feature distribution with the largest weighted probability is selected from the first and second target feature distributions as the representative feature distribution, so that the probability of the confrontation extraction feature occurring in the representative feature distribution is greater than its probability of occurring in the non-representative distribution. The target representative feature corresponding to the representative feature distribution is then obtained and used as the reference extraction feature, which can speed up the convergence of the interference adjustment value, makes it easier to obtain the target confrontation image with a smaller interference adjustment value, and improves the efficiency of generating the target confrontation image. In addition, the feature difference value and the target image loss value are positively correlated: when the target image loss value decreases, the feature difference value decreases and the difference between the confrontation extraction feature and the reference extraction feature shrinks, so the difference between the features of the countermeasure sample and the image features of the target image category is reduced, hiding the features of the countermeasure sample within the distribution satisfied by the features of images of the target image category.
In some embodiments, deriving the target image loss value based on the feature difference values comprises: obtaining a first image loss value based on the characteristic difference value, wherein the characteristic difference value and the first image loss value form a positive correlation; determining a first confidence of the current confrontation image in the target image category based on the confrontation extraction features; obtaining a second image loss value based on the first confidence coefficient; the second image loss value is in a negative correlation relation with the first confidence coefficient; and obtaining a target image loss value based on the first image loss value and the second image loss value.
The first confidence refers to the probability that the current confrontation image belongs to the target image category. The second image loss value is negatively correlated with the first confidence; for example, the negative or the reciprocal of the first confidence may be used as the second image loss value. The target image loss value is positively correlated with both the first image loss value and the second image loss value.
Specifically, the server may input the current confrontation image into the pre-trained image recognition model and obtain the image recognition result it outputs. The image recognition result may include the first confidence that the current confrontation image belongs to the target image category, and may further include a reference confidence that the current confrontation image belongs to a reference image category, where a reference image category is any category other than the target image category and may include at least one of the confrontation image category and image categories other than the confrontation image category. The reference confidence refers to the confidence that the current confrontation image belongs to the reference image category, and the server may obtain the second image loss value based on the first confidence and the reference confidence, for example based on the difference between them.
In some embodiments, the server may perform a weighted calculation of the first image loss value and the second image loss value, and take the result of the weighted calculation as the target image loss value, or may take the result of adding the first image loss value and the second image loss value as the target image loss value.
In this embodiment, the second image loss value and the first confidence level have a negative correlation, and when the second image loss value changes toward a decreasing direction, the first confidence level changes toward an increasing direction, so that the probability that the current confrontation image is recognized as the target image class increases, and the probability that the current confrontation image becomes the confrontation sample of the target image class increases.
In some embodiments, deriving the second image loss value based on the first confidence level comprises: determining a second confidence level of the current confrontation image in the confrontation image category based on the confrontation extraction features; obtaining a second image loss value based on a first confidence difference value between the first confidence and the second confidence; the second image loss value is in a negative correlation with the first confidence difference value.
Wherein the second confidence level refers to a probability that the current confrontation image belongs to the confrontation image category. The first confidence difference value may be in a positive correlation with a result obtained by subtracting the second confidence from the first confidence, for example, the result obtained by subtracting the second confidence from the first confidence. The second image loss value is inversely related to the first confidence level.
Specifically, the server may use the negative or the reciprocal of the first confidence difference value as the second image loss value, or scale the first confidence difference value and use the negative or the reciprocal of the scaled result as the second image loss value.
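A minimal sketch, assuming the two confidences have already been read from the image recognition result and using the negated, optionally scaled difference:

```python
def second_image_loss(first_confidence, second_confidence, scale=1.0):
    """Negative of the (optionally scaled) first confidence difference
    value: as this loss decreases, the target-category confidence rises
    relative to the confrontation-category confidence."""
    return -scale * (first_confidence - second_confidence)
```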
In this embodiment, the second image loss value and the first confidence difference value are negatively correlated, so that as the second image loss value decreases, the difference between the first confidence and the second confidence increases, improving the probability that the current confrontation image is recognized as the first image category.
In some embodiments, deriving the target image loss value based on the feature difference values comprises: obtaining a first image loss value based on the characteristic difference value, wherein the characteristic difference value and the first image loss value form a positive correlation; determining reference confidence degrees of the current confrontation image in various reference image categories based on the confrontation extraction features; selecting the maximum confidence coefficient from the reference confidence coefficients as a third confidence coefficient; obtaining a third image loss value based on a second confidence difference value between the first confidence and the third confidence; and obtaining a target image loss value based on the first image loss value and the third image loss value.
The third confidence is the largest of the reference confidences. The second confidence difference value refers to the difference between the first confidence and the third confidence and may be positively correlated with the result of subtracting the third confidence from the first confidence, for example equal to that result. The third image loss value is negatively correlated with the second confidence difference value, and hence negatively correlated with the first confidence.
Specifically, the server may perform a statistical calculation, for example a weighted calculation, on the first image loss value and the third image loss value to obtain the target image loss value, where the target image loss value is positively correlated with both; alternatively, the server may perform a statistical calculation, for example a weighted calculation, on the first, second, and third image loss values to obtain the target image loss value.
In this embodiment, the largest confidence among the reference confidences is selected as the third confidence, and the third image loss value is obtained based on the second confidence difference value between the first confidence and the third confidence. When the third image loss value decreases, the second confidence difference value increases, i.e. the first confidence increases while the third confidence decreases, so the first confidence can exceed the third confidence and become the largest confidence in the image recognition result, allowing the current confrontation image to be recognized as the first image category.
In some embodiments, adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain a target confrontation image corresponding to the target image includes: when the second confidence difference value is smaller than the confidence difference threshold value, adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain an updated current confrontation image; and returning to the step of extracting the features of the current confrontation image to obtain the confrontation extracted features until the second confidence difference value reaches the confidence difference threshold value, and taking the current confrontation image as the target confrontation image corresponding to the target image.
Specifically, the confidence difference threshold may be set as needed or preset. The second confidence difference value may be the result of subtracting the third confidence from the first confidence. The pixel value adjustment end condition may further include the second confidence difference value reaching the confidence difference threshold: when the second confidence difference value reaches the preset value, the server may determine that the pixel value adjustment end condition is satisfied, stop adjusting the pixel values of the current confrontation image based on the interference adjustment value, and take the current confrontation image at that moment as the target confrontation image. For example, the third image loss value may be expressed as formula (8) and the target image loss value as formula (9):

$$J_{CW} = \max\Big(\max_{j \ne t} Z_j(x_2) - Z_t(x_2),\; -\kappa\Big) \quad (8)$$

$$J_{final} = J_{HFC} + J_{CW} \quad (9)$$

where $J_{final}$ denotes the target image loss value, $J_{HFC}$ denotes the first image loss value, $J_{CW}$ denotes the third image loss value, $Z_t(x_2) - \max_{j \ne t} Z_j(x_2)$ denotes the second confidence difference value, $\kappa$ denotes the confidence difference threshold, $Z_t(x_2)$ denotes the first confidence, $x_2$ denotes the current confrontation image, and $\max_{j \ne t} Z_j(x_2)$ denotes the third confidence.
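A sketch of formulas (8) and (9) under the margin-loss reconstruction above; logits, target_class, and kappa are assumed names for the model outputs, the target image category index, and the confidence difference threshold:

```python
import torch

def third_image_loss(logits, target_class, kappa):
    """Formula (8): negatively correlated with the second confidence
    difference value and clipped once that difference reaches the
    confidence difference threshold kappa."""
    first_conf = logits[target_class]
    others = torch.cat([logits[:target_class], logits[target_class + 1:]])
    third_conf = others.max()
    # second confidence difference value = first_conf - third_conf
    return torch.clamp(third_conf - first_conf, min=-kappa)

def target_image_loss(j_hfc, j_cw):
    """Formula (9): J_final = J_HFC + J_CW."""
    return j_hfc + j_cw
```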
In this embodiment, when the second confidence difference value is smaller than the confidence difference threshold, the pixel values of the current confrontation image are adjusted based on the interference adjustment value to obtain an updated current confrontation image, and the method returns to the step of extracting the confrontation extraction features from the current confrontation image until the second confidence difference value reaches the confidence difference threshold, at which point the current confrontation image is taken as the target confrontation image corresponding to the target image. This controls the degree by which the first confidence exceeds the third confidence, preventing it from becoming excessively higher and reducing overfitting.
In some embodiments, adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain a target confrontation image corresponding to the target image includes: adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain an updated current confrontation image; and calculating a current image difference value between the updated current countermeasure image and the original countermeasure image corresponding to the current countermeasure image, returning to the step of performing feature extraction on the current countermeasure image when the current image difference value is smaller than an image difference threshold value to obtain countermeasure extraction features until the current image difference value reaches the image difference threshold value, and taking the current countermeasure image as a target countermeasure image corresponding to the target image.
The original countermeasure image is one that has not yet been subjected to a countermeasure attack; the current confrontation image may be obtained by adding a perturbation invisible to the human eye, such as modified pixel values, to the original countermeasure image. The current image difference value refers to the difference between the updated current confrontation image and the original countermeasure image. For example, the difference between the pixel values of each pixel of the two images may be computed to obtain per-pixel difference values, which are then aggregated to obtain the current image difference value; for instance, a weighted combination of the pixel difference values may be used, or the p-norm of the vector formed by the pixel difference values, where p can be set as needed, for example to 3. The current image difference value may also be the difference between the features of the updated current confrontation image and the features of the original countermeasure image.
Specifically, the server may superimpose the interference adjustment value on the pixel values of the current confrontation image to obtain the updated current confrontation image. There may be one or more interference adjustment values, where a plurality refers to at least two. When there is only one interference adjustment value, every pixel value in the current confrontation image may be adjusted identically by it; for example, when the interference adjustment value is 0.1, 0.1 may be added to each pixel value in the current confrontation image to obtain the updated pixel values.
In some embodiments, the pixel value adjustment end condition may further include that the current image difference value reaches an image difference threshold, the server may calculate a difference value between the updated current countermeasure image and an original countermeasure image corresponding to the current countermeasure image to obtain the current image difference value, when the current image difference value is smaller than the image difference threshold, the step of performing feature extraction on the current countermeasure image is returned to obtain the countermeasure extraction feature, until the current image difference value reaches the image difference threshold, and the current countermeasure image is used as a target countermeasure image corresponding to the target image.
In some embodiments, there may be interference adjustment values corresponding to a plurality of pixel points, and the interference adjustment values may be the same or different; for example, the interference adjustment value corresponding to one pixel point may be -0.1 and that of another pixel point 0.1. The server can adjust the corresponding pixel points in the current confrontation image based on the interference adjustment values respectively corresponding to the pixel points to obtain the updated current confrontation image.
In some embodiments, the current image difference value may be constrained using formula (10):

$$\lVert x_{adv} - x \rVert_p \le \varepsilon \quad (10)$$

where $\varepsilon$ denotes the image difference threshold, which may also be referred to as the interference budget, $\lVert x_{adv} - x \rVert_p$ denotes the $p$-norm of $x_{adv} - x$, $x_{adv}$ denotes the current confrontation image, and $x$ denotes the original countermeasure image.
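A sketch of enforcing this constraint, using the infinity norm for simplicity (the application leaves p configurable); pixel values are assumed to lie in [0, 1]:

```python
import torch

def project_to_budget(current_adv, original, eps):
    """Keep the current confrontation image within the interference
    budget: ||x_adv - x||_inf <= eps, i.e. formula (10) with the
    infinity norm, then clamp back to the valid pixel range."""
    perturbation = torch.clamp(current_adv - original, min=-eps, max=eps)
    return torch.clamp(original + perturbation, 0.0, 1.0)
```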
Different image difference thresholds yield different target confrontation images, i.e. different countermeasure samples. Fig. 6 shows the countermeasure samples corresponding to different image difference thresholds: in the first row, the first image is a normal sample from the chest X-ray data set, and the following images are the countermeasure samples generated with image difference thresholds of 0.5, 1, 2, 4, and 8, respectively; in the second row, the first image is a normal sample from the fundus data set, and the following images are the countermeasure samples generated with image difference thresholds of 0.5, 1, 2, 4, and 8, respectively. As can be seen from fig. 6, the human eye cannot clearly distinguish the countermeasure samples from the normal samples, but for the image recognition model, as the image difference threshold increases, the countermeasure samples become increasingly distinguishable from the normal samples.
In this embodiment, the current image difference value between the updated current confrontation image and the original countermeasure image corresponding to the current confrontation image is calculated; when the current image difference value is smaller than the image difference threshold, the method returns to the step of extracting confrontation extraction features from the current confrontation image, until the current image difference value reaches the image difference threshold, at which point the current confrontation image is taken as the target confrontation image corresponding to the target image. The difference between the current confrontation image and the original countermeasure image can thus be controlled so that the two cannot be distinguished by the naked eye, improving the accuracy of the target confrontation image.
In some embodiments, the interference adjustment value includes a pixel adjustment value for each pixel point, and adjusting the pixel values of the current confrontation image based on the interference adjustment value to obtain the target confrontation image corresponding to the target image includes: taking the derivative of the target image loss value with respect to the pixel values of the pixel points of the current confrontation image to obtain the pixel adjustment values corresponding to the pixel points in the current confrontation image; and adjusting the pixel values of the pixel points in the current confrontation image based on their respective pixel adjustment values to obtain the target confrontation image corresponding to the target image.
Specifically, the server may differentiate the target image loss value with respect to the pixel values of the pixel points in the current confrontation image to obtain the pixel adjustment value corresponding to each pixel point, adjust the pixel value of each pixel point with its pixel adjustment value to obtain the adjusted current confrontation image, and obtain the target confrontation image from the adjusted current confrontation image.
In some embodiments, the server may differentiate the target image loss value with respect to the pixel values of the pixel points in the current confrontation image to obtain the derivative value corresponding to each pixel point, determine a pixel point adjustment coefficient from the derivative value, obtain a pixel adjustment amplitude, and take the product of the adjustment coefficient and the adjustment amplitude as the pixel adjustment value of the pixel point. The pixel adjustment amplitude is the maximum amount by which a pixel value can be increased or decreased in a single adjustment and may be preset, for example to 1; with an amplitude of 1, a pixel whose value is 90 may be adjusted to a value in the range 89 to 91. The server may compare the derivative value with a numerical threshold and determine the pixel point adjustment coefficient from the comparison: when the derivative value is greater than the numerical threshold, the adjustment coefficient is a first value, and when it is smaller, the adjustment coefficient is a second value. The numerical threshold may be set as needed, for example to 0; the first value is greater than the second value, for example the first value is 1 and the second value is -1.
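A sketch of this per-pixel rule; whether the adjustment is added or subtracted depends on whether the tracked quantity is to be increased or decreased, which the surrounding loss convention determines, and all names are illustrative:

```python
import torch

def pixel_adjustment(grad, amplitude=1.0, threshold=0.0):
    """Per-pixel adjustment value: coefficient +1 where the derivative
    exceeds the numerical threshold, -1 where it is below, multiplied
    by the pixel adjustment amplitude."""
    coeff = torch.where(grad > threshold,
                        torch.ones_like(grad), -torch.ones_like(grad))
    return amplitude * coeff

# Example step: subtract the adjustment to decrease the target image
# loss value (add it instead if the quantity is to be increased).
# current_adv and grad are assumed to be same-shaped image tensors.
# updated_adv = current_adv - pixel_adjustment(grad)
```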
In this embodiment, the pixel values of the pixels in the current confrontation image are adjusted based on the pixel adjustment values respectively corresponding to the pixels, so as to obtain the target confrontation image corresponding to the target image, thereby realizing the pixel value adjustment of each pixel, and improving the flexibility and accuracy of adjusting the pixel values.
In some embodiments, the method further comprises: inputting the target countermeasure image into an image recognition model to be trained corresponding to the target image category to obtain the countermeasure confidence corresponding to the target image category of the target countermeasure image; determining a target model loss value based on the confrontation confidence; the target model loss value and the confrontation confidence degree form a positive correlation; and adjusting model parameters of the image recognition model based on the loss value of the target model to obtain the trained image recognition model.
The image recognition model to be trained corresponding to the target image category refers to a pre-trained image recognition model. The confrontation confidence is the confidence, output by the image recognition model, that the target confrontation image belongs to the target image category. Model parameters refer to the variable parameters within the model; for neural network models they may also be referred to as neural network weights.
Specifically, the server may use the target countermeasure image as a negative sample of the image recognition model to be trained, train the image recognition model to be trained, and obtain a target image whose true image category is the target image category, use the target image as a positive sample of the image recognition model to be trained, train the image recognition model to be trained by using the positive sample and the negative sample, and obtain the trained image recognition model. In each training process, the number of the positive samples and the number of the negative samples can be determined according to needs.
In some embodiments, the server may train the image recognition model iteratively: the server adjusts the model parameters of the image recognition model in the direction that decreases the target model loss value and iterates until a model convergence condition is satisfied, taking the image recognition model with the adjusted parameters as the trained image recognition model. The model convergence condition includes, but is not limited to, the change in the target model loss value being smaller than a preset loss change value or the change in the model parameters being smaller than a preset parameter change value.
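A minimal sketch of one such training step in PyTorch, assuming a classifier whose logits cover the target image category; the loss composition is an illustrative reading of the positive/negative-sample description, not the application's exact formula:

```python
import torch
import torch.nn.functional as F

def countermeasure_training_step(model, optimizer, target_images,
                                 adv_images, target_class):
    """One training step: target images are positive samples of the
    target image category, target confrontation images are negative
    samples whose confrontation confidence should be pushed down."""
    model.train()
    optimizer.zero_grad()
    pos_logits = model(target_images)
    adv_logits = model(adv_images)
    labels = torch.full((target_images.size(0),), target_class,
                        dtype=torch.long, device=pos_logits.device)
    # Positive-sample term: keep recognizing real target images.
    pos_loss = F.cross_entropy(pos_logits, labels)
    # Negative-sample term: the log of the confrontation confidence is
    # positively correlated with that confidence, so minimizing it
    # lowers the chance the confrontation image is recognized as the
    # target image category.
    adv_conf = F.softmax(adv_logits, dim=1)[:, target_class]
    adv_loss = torch.log(adv_conf + 1e-12).mean()
    loss = pos_loss + adv_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```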
In this embodiment, the target model loss value is determined based on the countermeasure confidence, and since the target model loss value and the countermeasure confidence are in a positive correlation relationship, when the target model loss value decreases, the countermeasure confidence also decreases, that is, the possibility that the target countermeasure image is recognized as the target image category decreases, so that the possibility that the trained image recognition model recognizes the target countermeasure image as the target image category may be reduced, the robustness of the image recognition model to the countermeasure sample may be improved, and the reliability of the image recognition model may be improved.
In some embodiments, as shown in fig. 7, there is provided an image recognition method comprising the steps of:
702. Acquire a target image of the target image category and a current confrontation image corresponding to the confrontation image category.
704. Perform feature extraction on each target image to obtain the target extraction features corresponding to each target image; acquire the current feature distribution and calculate the current occurrence probability of each target extraction feature in it; count the current occurrence probabilities corresponding to the target extraction features to obtain the current probability statistic, and adjust the feature distribution parameters corresponding to the current feature distribution so that the current probability statistic increases, obtaining the target feature distribution.
The target feature distribution may include a first target feature distribution and a second target feature distribution, where the first target feature distribution corresponds to a first target distribution weight and the second target feature distribution corresponds to a second target distribution weight; a weighted calculation is performed on the first and second target feature distributions based on these weights to obtain the target feature distribution, i.e. the target feature distribution is obtained by superimposing the first and second target feature distributions.
When the image recognition model is a neural network model, a Gaussian mixture model can be used to model the feature distribution at each activation layer of the image recognition model to obtain the target feature distribution, and the countermeasure samples are fitted to the normal samples in feature space by maximizing their log-likelihood.
706. Perform feature extraction on the current confrontation image to obtain the confrontation extraction features.
708. Determine the first target occurrence probability of the confrontation extraction features in the first target feature distribution and obtain the first weighted probability based on the first target distribution weight and that occurrence probability; determine the second target occurrence probability of the confrontation extraction features in the second target feature distribution and obtain the second weighted probability based on the second target distribution weight and that occurrence probability.
710. Select, based on the first weighted probability and the second weighted probability, the feature distribution with the larger weighted probability from the first and second target feature distributions as the representative feature distribution, and acquire the target representative feature corresponding to the representative feature distribution as the reference extraction feature.
712. Calculate the feature difference value between the confrontation extraction features and the reference extraction features, and obtain the first image loss value based on the feature difference value.
Wherein the feature difference value and the first image loss value are positively correlated.
714. Determine the first confidence of the current confrontation image for the target image category based on the confrontation extraction features, determine the second confidence of the current confrontation image for the confrontation image category based on the confrontation extraction features, and obtain the second image loss value based on the first confidence difference value between the first confidence and the second confidence.
Wherein the second image loss value and the first confidence difference value are negatively correlated.
716. Obtain the target image loss value based on the first image loss value and the second image loss value, and obtain the interference adjustment value based on the target image loss value.
718. Determine whether the second confidence difference value is less than the confidence difference threshold and the current image difference value is less than the image difference threshold; if yes, perform step 720, otherwise (i.e. the second confidence difference value reaches the confidence difference threshold or the current image difference value reaches the image difference threshold) perform step 722.
720. Adjust the pixel values of the current confrontation image based on the interference adjustment value to obtain an updated current confrontation image, perform feature extraction on the updated current confrontation image to obtain an updated confrontation extraction feature, and return to step 712.
722. Take the current confrontation image as the target confrontation image corresponding to the target image.
The server may adjust the pixel values of the current confrontation image based on the interference adjustment value to obtain an updated current confrontation image, and calculate the difference between the updated current confrontation image and the original confrontation image corresponding to the current confrontation image to obtain the current image difference value.
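Putting steps 712-722 together, the iterative adjustment might be sketched as below, reusing target_image_loss from the previous sketch; the step size, both thresholds, the sign-gradient update, and the helper layer_features (which is assumed to return the activation-layer feature of an image) are choices of the example, not of the patent:

```python
def craft_target_confrontation_image(model, layer_features, original_adv,
                                     reference_feature, target_idx, adv_idx,
                                     step_size=1e-2, conf_diff_threshold=0.9,
                                     image_diff_threshold=8 / 255, max_iters=100):
    current = original_adv.clone().requires_grad_(True)
    for _ in range(max_iters):
        feature = layer_features(current)      # confrontation extraction feature
        logits = model(current.unsqueeze(0))[0]
        loss = target_image_loss(feature, reference_feature, logits,
                                 target_idx, adv_idx)

        probs = F.softmax(logits, dim=-1)
        conf_diff = (probs[target_idx] - probs[adv_idx]).item()
        image_diff = (current - original_adv).abs().max().item()
        # Step 718: stop once either the confidence difference or the image
        # difference reaches its threshold.
        if conf_diff >= conf_diff_threshold or image_diff >= image_diff_threshold:
            break

        # Step 720: the interference adjustment value is derived from the loss
        # gradient with respect to the pixel values of the current image.
        grad, = torch.autograd.grad(loss, current)
        current = (current - step_size * grad.sign()).detach().requires_grad_(True)

    return current.detach()                    # step 722: target confrontation image
```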
724. Input the target confrontation image into the image recognition model to be trained corresponding to the target image category to obtain the confrontation confidence of the target confrontation image for the target image category.
The target confrontation image may also be referred to as a confrontation sample, i.e., an adversarial example.
726. Determine a target model loss value based on the confrontation confidence, adjust the model parameters of the image recognition model based on the target model loss value to obtain a trained image recognition model, and perform image recognition using the trained image recognition model.
The target model loss value is positively correlated with the confrontation confidence.
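Steps 724-726 might then be sketched as a single training step; treating the mean confrontation confidence itself as the target model loss value is the simplest choice consistent with the stated positive correlation, and is an assumption of the example:

```python
def confrontation_training_step(model, optimizer, target_confrontation_images, target_idx):
    optimizer.zero_grad()
    logits = model(target_confrontation_images)            # step 724
    confrontation_confidence = F.softmax(logits, dim=-1)[:, target_idx]

    # Step 726: the target model loss value is positively correlated with the
    # confrontation confidence, so gradient descent drives that confidence
    # down for crafted confrontation samples.
    loss = confrontation_confidence.mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```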
In this embodiment, the confrontation samples are constrained simultaneously in the feature spaces of different layers of the image recognition model; that is, the distribution of the confrontation samples in feature space is constrained so that they fit the distribution of the normal samples of the target image category. Training the image recognition model with such confrontation samples can therefore improve its robustness to confrontation samples and improve its reliability.
The image processing method provided by the application can be applied to the confrontation training of a neural network model, for example in the medical field. Suppose the pre-trained image recognition model recognizes objects in medical images, where the medical images include a normal category and an abnormal category; the normal category may be lung images without pneumonia and the abnormal category lung images with pneumonia. The server can train an initial image recognition model with the normal and abnormal medical images to obtain a pre-trained image recognition model that correctly recognizes whether a medical image belongs to the normal or the abnormal category. The server can then extract features from the normal-category medical images to obtain normal extraction features, extract features from the abnormal-category medical images to obtain abnormal extraction features, and adjust a normal-category medical image in the direction that brings its normal extraction features close to the abnormal extraction features, so that the adjusted medical image is recognized as abnormal by the pre-trained model. The server can use such adjusted medical images for confrontation training, adjusting the pre-trained image recognition model so that the adjusted medical image is again recognized as the normal category, i.e., the model correctly recognizes the adjusted image. This improves the recognition accuracy for medical images and can prevent a lawless person from modifying a normal medical image into an abnormal one and applying for insurance compensation with the modified image.
The image processing method provided by the application can also be applied to computer vision tasks such as segmentation or detection, for example to encrypt images containing objects to be detected so as to keep those images secure. The server can store the interference adjustment value of the target confrontation image relative to the original image, i.e., store the relationship between the target confrontation image and the interference adjustment value. When the encrypted image needs to be recognized, the server can remove the interference adjustment value from the target confrontation image to recover the original image and perform image recognition on the original image. For example, the image to be protected can be a fundus image: adjusting the fundus image with an interference adjustment value keeps it secure and prevents a malicious attacker from stealing and illegally using it, and when an abnormal region in the fundus image needs to be recognized, the stored interference adjustment value is removed from the target confrontation image before recognition.
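A toy sketch of this storage-and-removal scheme follows; the in-memory dictionary and the assumption that the adjusted pixel values stay within the valid range (so that subtraction recovers the original exactly) are simplifications of the example:

```python
import numpy as np

interference_store = {}  # image_id -> stored interference adjustment value

def protect_image(image_id, original, interference):
    # Store the relationship between the target confrontation image and its
    # interference adjustment value, and return the protected image.
    interference_store[image_id] = interference
    return original + interference

def recover_for_recognition(image_id, protected):
    # Remove the stored interference adjustment value to recover the original
    # image before performing image recognition on it.
    return protected - interference_store[image_id]
```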
The image processing method provided by the application can be deployed on various operating systems, for example in Linux, Windows, and macOS environments, and can also be embedded into a deep-learning-based classification model.
It should be understood that although the steps in the flowcharts of fig. 2-7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict order limitation on these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In some embodiments, as shown in fig. 8, an image processing apparatus is provided. The apparatus may be implemented as part of a computer device by software modules, hardware modules, or a combination of the two, and specifically includes: a current confrontation image obtaining module 802, a reference extraction feature obtaining module 804, a confrontation extraction feature obtaining module 806, a target image loss value obtaining module 808, and a target confrontation image obtaining module 810, wherein:
a current confrontation image obtaining module 802, configured to obtain a target image of a target image category and a current confrontation image corresponding to the confrontation image category.
A reference extraction feature obtaining module 804, configured to perform feature extraction on the target image to obtain a target extraction feature, and obtain a reference extraction feature based on the target extraction feature.
a confrontation extraction feature obtaining module 806, configured to perform feature extraction on the current confrontation image to obtain a confrontation extraction feature.
A target image loss value obtaining module 808, configured to calculate a feature difference value between the confrontation extraction feature and the reference extraction feature, and obtain a target image loss value based on the feature difference value; the feature difference value is positively correlated with the target image loss value.
a target confrontation image obtaining module 810, configured to obtain an interference adjustment value based on the target image loss value, and adjust the pixel values of the current confrontation image based on the interference adjustment value to obtain a target confrontation image corresponding to the target image.
In some embodiments, there are a plurality of target images, and the reference extraction feature obtaining module 804 includes:
a target extraction feature obtaining unit, configured to perform feature extraction on each target image to obtain target extraction features respectively corresponding to the target images;
a current occurrence probability calculation unit, configured to obtain a current feature distribution and calculate the current occurrence probability of each target extraction feature in the current feature distribution;
a target feature distribution obtaining unit, configured to count the current occurrence probabilities corresponding to the target extraction features to obtain a current probability statistic value, and adjust the feature distribution parameters corresponding to the current feature distribution so that the current probability statistic value changes towards an increasing direction, to obtain the target feature distribution; and
a reference extraction feature obtaining unit, configured to obtain the target representative feature corresponding to the target feature distribution to obtain the reference extraction feature.
In some embodiments, the current feature distribution includes a first current feature distribution and a second current feature distribution, and the current occurrence probability includes a first current occurrence probability corresponding to the first current feature distribution and a second current occurrence probability corresponding to the second current feature distribution. The target feature distribution obtaining unit is further configured to: obtain a first current distribution weight corresponding to the first current feature distribution and a second current distribution weight corresponding to the second current feature distribution; perform a weighted summation of the first current occurrence probability and the second current occurrence probability based on the first current distribution weight and the second current distribution weight to obtain the current occurrence probability corresponding to the target extraction feature; and count the current occurrence probabilities corresponding to the target extraction features to obtain a current probability statistic value, and adjust the distribution parameters corresponding to the first current feature distribution and the second current feature distribution, together with the first current distribution weight and the second current distribution weight, so that the current probability statistic value changes towards an increasing direction, to obtain a first target feature distribution, a second target feature distribution, a first target distribution weight, and a second target distribution weight.
In some embodiments, the reference extraction feature obtaining unit is further configured to: determine a first target occurrence probability of the confrontation extraction feature in the first target feature distribution, and obtain a first weighted probability based on the first target distribution weight and the first target occurrence probability; determine a second target occurrence probability of the confrontation extraction feature in the second target feature distribution, and obtain a second weighted probability based on the second target distribution weight and the second target occurrence probability; select the feature distribution with the highest probability from the first target feature distribution and the second target feature distribution as the representative feature distribution based on the first weighted probability and the second weighted probability; and obtain the target representative feature corresponding to the representative feature distribution as the reference extraction feature.
In some embodiments, the target image loss value obtaining module 808 includes:
the first image loss value obtaining unit is used for obtaining a first image loss value based on the feature difference value, and the feature difference value and the first image loss value form a positive correlation relationship.
And the first confidence coefficient determining unit is used for determining the first confidence coefficient of the current confrontation image in the target image category based on the confrontation extraction features.
A second image loss value obtaining unit configured to obtain a second image loss value based on the first confidence; the second image loss value is inversely related to the first confidence level.
And a target image loss value obtaining unit configured to obtain a target image loss value based on the first image loss value and the second image loss value.
In some embodiments, the second image loss value obtaining unit is further configured to determine a second confidence that the current confrontation image belongs to the confrontation image category based on the confrontation extraction feature, and obtain the second image loss value based on a first confidence difference value between the first confidence and the second confidence; the second image loss value is negatively correlated with the first confidence difference value.
In some embodiments, the target image loss value obtaining module 808 includes:
the first image loss value obtaining unit is used for obtaining a first image loss value based on the characteristic difference value, and the characteristic difference value and the first image loss value form a positive correlation relationship.
And the reference confidence determining unit is used for determining the reference confidence of the current confrontation image in each reference image category based on the confrontation extraction features.
And the third confidence coefficient obtaining unit is used for selecting the maximum confidence coefficient from the reference confidence coefficients to serve as a third confidence coefficient.
A third image loss value obtaining unit, configured to obtain a third image loss value based on a second confidence difference value between the first confidence and the third confidence; the third image loss value is in a negative correlation with the second confidence difference value.
And a target image loss value obtaining unit configured to obtain a target image loss value based on the first image loss value and the third image loss value.
In some embodiments, the target confrontation image obtaining module 810 includes:
and the updated current confrontation image obtaining unit is used for adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain an updated current confrontation image when the second confidence difference value is smaller than the confidence difference threshold value.
And the target countermeasure image obtaining unit is used for returning to the step of extracting the features of the current countermeasure image to obtain the countermeasure extraction features until the second confidence difference value reaches the confidence difference threshold value, and taking the current countermeasure image as the target countermeasure image corresponding to the target image.
In some embodiments, the target confrontation image obtaining module 810 includes:
and the updated current confrontation image obtaining unit is used for adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain the updated current confrontation image.
And the target countermeasure image acquisition unit is used for calculating a current image difference value between the updated current countermeasure image and the original countermeasure image corresponding to the current countermeasure image, returning to the step of performing feature extraction on the current countermeasure image when the current image difference value is smaller than an image difference threshold value to obtain a countermeasure extraction feature until the current image difference value reaches the image difference threshold value, and taking the current countermeasure image as the target countermeasure image corresponding to the target image.
In some embodiments, the interference adjustment value includes a pixel adjustment value corresponding to each pixel point, and the target confrontation image obtaining module 810 includes:
and the pixel adjustment value obtaining unit is used for obtaining the pixel adjustment value corresponding to each pixel point in the current confrontation image respectively by deriving the loss value of the target image based on the pixel value of each pixel point of the current confrontation image.
And the pixel value adjusting unit is used for adjusting the pixel values of the pixels in the current confrontation image based on the pixel adjusting values respectively corresponding to the pixels to obtain a target confrontation image corresponding to the target image.
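Reusing model, layer_features, reference_feature, target_idx, adv_idx, and target_image_loss from the sketches above, the per-pixel adjustment described by these two units might be sketched as follows; current_confrontation_image and epsilon are assumptions of the example:

```python
current = current_confrontation_image.clone().requires_grad_(True)
loss = target_image_loss(layer_features(current), reference_feature,
                         model(current.unsqueeze(0))[0], target_idx, adv_idx)

# Differentiating the target image loss value with respect to the pixel values
# yields one pixel adjustment value per pixel point.
pixel_adjustments, = torch.autograd.grad(loss, current)

epsilon = 1e-2  # assumed step size
target_confrontation_image = (current - epsilon * pixel_adjustments).detach()
```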
In some embodiments, the apparatus further comprises:
and the countermeasure confidence coefficient obtaining module is used for inputting the target countermeasure image into the image recognition model to be trained corresponding to the target image category to obtain the countermeasure confidence coefficient of the target countermeasure image corresponding to the target image category.
A target model loss value determination module for determining a target model loss value based on the confrontation confidence; the target model loss value is positively correlated with the confrontation confidence.
And the trained image recognition model obtaining module is used for adjusting the model parameters of the image recognition model based on the target model loss value to obtain the trained image recognition model.
For specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. Each module in the image processing apparatus may be implemented wholly or partially by software, hardware, or a combination of the two. The modules may be embedded in hardware form in, or independent of, a processor of the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In some embodiments, a computer device is provided, which may be a server whose internal structure is as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores the data involved in the image processing method. The network interface of the computer device communicates with an external terminal through a network connection. The computer program, when executed by the processor, implements the image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments, there is further provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above method embodiments when executing the computer program.
In some embodiments, a computer-readable storage medium is provided, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In some embodiments, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps in the above method embodiments.
Those skilled in the art will understand that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any combination that involves no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (15)
1. An image processing method, characterized in that the method comprises:
acquiring a target image of a target image category and a current confrontation image corresponding to the confrontation image category;
performing feature extraction on the target image to obtain target extraction features, and obtaining reference extraction features based on the target extraction features;
performing feature extraction on the current confrontation image to obtain confrontation extraction features;
calculating a feature difference value between the confrontation extraction feature and the reference extraction feature, and obtaining a target image loss value based on the feature difference value; the feature difference value and the target image loss value form a positive correlation;
and obtaining an interference adjustment value based on the target image loss value, and adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain a target confrontation image corresponding to the target image.
2. The method according to claim 1, wherein there are a plurality of target images, and the performing feature extraction on the target image to obtain target extraction features and obtaining reference extraction features based on the target extraction features comprises:
performing feature extraction on each target image to obtain target extraction features corresponding to each target image;
acquiring current feature distribution, and calculating the current occurrence probability of each target extraction feature in the current feature distribution;
counting the current occurrence probability corresponding to each target extraction feature to obtain a current probability statistic value, and adjusting the feature distribution parameters corresponding to the current feature distribution so that the current probability statistic value changes towards an increasing direction, to obtain a target feature distribution;
and acquiring a target representative feature corresponding to the target feature distribution to obtain the reference extraction feature.
3. The method of claim 2, wherein the current feature distribution comprises a first current feature distribution and a second current feature distribution, and the current occurrence probability comprises a first current occurrence probability corresponding to the first current feature distribution and a second current occurrence probability corresponding to the second current feature distribution;
wherein the counting the current occurrence probability corresponding to each target extraction feature to obtain a current probability statistic value, and adjusting the feature distribution parameters corresponding to the current feature distribution so that the current probability statistic value changes towards an increasing direction, to obtain the target feature distribution, comprises:
acquiring a first current distribution weight corresponding to the first current feature distribution and a second current distribution weight corresponding to the second current feature distribution;
performing a weighted summation of the first current occurrence probability and the second current occurrence probability based on the first current distribution weight and the second current distribution weight, to obtain the current occurrence probability corresponding to the target extraction feature;
counting the current occurrence probability corresponding to each target extraction feature to obtain a current probability statistic value, and adjusting the distribution parameters corresponding to the first current feature distribution and the second current feature distribution, the first current distribution weight, and the second current distribution weight so that the current probability statistic value changes towards an increasing direction, to obtain a first target feature distribution, a second target feature distribution, a first target distribution weight, and a second target distribution weight.
4. The method according to claim 3, wherein the obtaining a target representative feature corresponding to the target feature distribution to obtain the reference extraction feature comprises:
determining a first target occurrence probability of the confrontation extraction features in the first target feature distribution, and obtaining a first weighted probability based on the first target distribution weight and the first target occurrence probability;
determining a second target occurrence probability of the confrontation extraction features in the second target feature distribution, and obtaining a second weighted probability based on the second target distribution weight and the second target occurrence probability;
selecting a feature distribution with the highest probability from the first target feature distribution and the second target feature distribution as a representative feature distribution based on the first weighted probability and the second weighted probability;
and acquiring the target representative features corresponding to the representative feature distribution as the reference extraction features.
5. The method of claim 1, wherein the deriving a target image loss value based on the feature difference value comprises:
obtaining a first image loss value based on the feature difference value, wherein the feature difference value and the first image loss value form a positive correlation;
determining a first confidence level of a current confrontation image in the target image category based on the confrontation extraction features;
obtaining a second image loss value based on the first confidence; the second image loss value is in a negative correlation with the first confidence;
and obtaining a target image loss value based on the first image loss value and the second image loss value.
6. The method of claim 5, wherein deriving a second image loss value based on the first confidence level comprises:
determining a second confidence level of the current confrontation image in the confrontation image category based on the confrontation extraction features;
obtaining a second image loss value based on a first confidence difference value between the first confidence and the second confidence; the second image loss value is in a negative correlation with the first confidence difference value.
7. The method of claim 1, wherein the deriving a target image loss value based on the feature difference value comprises:
obtaining a first image loss value based on the feature difference value, wherein the feature difference value and the first image loss value form a positive correlation;
determining, based on the confrontation extraction features, the reference confidence of the current confrontation image in each reference image category;
selecting the maximum confidence from the reference confidences as a third confidence;
obtaining a third image loss value based on a second confidence difference value between the first confidence and the third confidence; the third image loss value and the second confidence difference value are in a negative correlation relationship;
and obtaining a target image loss value based on the first image loss value and the third image loss value.
8. The method of claim 7, wherein the adjusting the pixel values of the current countermeasure image based on the interference adjustment value to obtain a target countermeasure image corresponding to the target image comprises:
when the second confidence difference value is smaller than a confidence difference threshold value, adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain an updated current confrontation image;
and returning to the step of extracting the features of the current confrontation image to obtain the confrontation extraction features until the second confidence difference value reaches the confidence difference threshold value, and taking the current confrontation image as the target confrontation image corresponding to the target image.
9. The method of claim 1, wherein the adjusting the pixel values of the current countermeasure image based on the interference adjustment value to obtain a target countermeasure image corresponding to the target image comprises:
adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain an updated current confrontation image;
calculating a current image difference value between the updated current confrontation image and the original confrontation image corresponding to the current confrontation image; when the current image difference value is less than an image difference threshold, returning to the step of performing feature extraction on the current confrontation image to obtain confrontation extraction features, until the current image difference value reaches the image difference threshold; and taking the current confrontation image as a target confrontation image corresponding to the target image.
10. The method of claim 1, wherein the interference adjustment value comprises a pixel adjustment value corresponding to each pixel point, and the adjusting the pixel value of the current confrontation image based on the interference adjustment value to obtain the target confrontation image corresponding to the target image comprises:
deriving the target image loss value based on the pixel value of each pixel point of the current confrontation image to obtain a pixel adjustment value corresponding to each pixel point in the current confrontation image;
and adjusting the pixel values of the pixel points in the current confrontation image based on the pixel adjustment values respectively corresponding to the pixel points to obtain a target confrontation image corresponding to the target image.
11. The method of claim 1, further comprising:
inputting the target confrontation image into an image recognition model to be trained corresponding to the target image category to obtain confrontation confidence of the target confrontation image corresponding to the target image category;
determining a target model loss value based on the confrontation confidence; the target model loss value is in a positive correlation with the confrontation confidence;
and adjusting model parameters of the image recognition model based on the target model loss value to obtain a trained image recognition model.
12. An image processing apparatus, characterized in that the apparatus comprises:
the current confrontation image acquisition module is used for acquiring a target image of a target image category and a current confrontation image corresponding to the confrontation image category;
a reference extraction feature obtaining module, configured to perform feature extraction on the target image to obtain a target extraction feature, and obtain a reference extraction feature based on the target extraction feature;
the confrontation extraction feature obtaining module is used for carrying out feature extraction on the current confrontation image to obtain confrontation extraction features;
a target image loss value obtaining module, configured to calculate a feature difference value between the confrontation extraction feature and the reference extraction feature, and obtain a target image loss value based on the feature difference value; the feature difference value and the target image loss value form a positive correlation;
and the target confrontation image obtaining module is used for obtaining an interference adjustment value based on the target image loss value, adjusting the pixel value of the current confrontation image based on the interference adjustment value, and obtaining a target confrontation image corresponding to the target image.
13. The apparatus of claim 12, wherein there are a plurality of target images, and the reference extraction feature obtaining module comprises:
a target extraction feature obtaining unit, configured to perform feature extraction on each target image to obtain target extraction features respectively corresponding to the target images;
a current occurrence probability calculation unit, configured to obtain a current feature distribution and calculate the current occurrence probability of each target extraction feature in the current feature distribution;
a target feature distribution obtaining unit, configured to count the current occurrence probabilities corresponding to the target extraction features to obtain a current probability statistic value, and adjust the feature distribution parameters corresponding to the current feature distribution so that the current probability statistic value changes towards an increasing direction, to obtain the target feature distribution; and
a reference extraction feature obtaining unit, configured to obtain the target representative feature corresponding to the target feature distribution to obtain the reference extraction feature.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 11 when executing the computer program.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110177584.XA CN113569611A (en) | 2021-02-08 | 2021-02-08 | Image processing method, image processing device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113569611A true CN113569611A (en) | 2021-10-29 |
Family
ID=78161216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110177584.XA Pending CN113569611A (en) | 2021-02-08 | 2021-02-08 | Image processing method, image processing device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113569611A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114140349A (en) * | 2021-11-24 | 2022-03-04 | 支付宝(杭州)信息技术有限公司 | Method and device for generating interference image |
CN114565513A (en) * | 2022-03-15 | 2022-05-31 | 北京百度网讯科技有限公司 | Method and device for generating confrontation image, electronic equipment and storage medium |
CN114386156A (en) * | 2022-03-23 | 2022-04-22 | 四川新迎顺信息技术股份有限公司 | BIM-based hidden member display method, device, equipment and readable storage medium |
WO2024179575A1 (en) * | 2023-03-02 | 2024-09-06 | 腾讯科技(深圳)有限公司 | Data processing method, and device and computer-readable storage medium |
CN117454380A (en) * | 2023-12-22 | 2024-01-26 | 鹏城实验室 | Malicious software detection method, training method, device, equipment and medium |
CN117454380B (en) * | 2023-12-22 | 2024-03-01 | 鹏城实验室 | Malicious software detection method, training method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40054013; Country of ref document: HK ||
SE01 | Entry into force of request for substantive examination | ||