CN112949675A - Method and device for interfering image recognition system, readable storage medium and terminal - Google Patents


Info

Publication number
CN112949675A
CN112949675A (application CN202011529605.1A)
Authority
CN
China
Prior art keywords
image recognition
interference model
recognition system
neural network
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011529605.1A
Other languages
Chinese (zh)
Inventor
秦豪 (Qin Hao)
赵明 (Zhao Ming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yogo Robot Co Ltd
Original Assignee
Shanghai Yogo Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yogo Robot Co Ltd filed Critical Shanghai Yogo Robot Co Ltd
Priority to CN202011529605.1A priority Critical patent/CN112949675A/en
Publication of CN112949675A publication Critical patent/CN112949675A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device, a readable storage medium and a terminal for interfering with an image recognition system. The method comprises the following steps: randomly acquiring a plurality of data pictures containing target objects and establishing an attack sample set; constructing an interference model, superimposing the randomly initialized interference model on the data pictures of the attack sample set, inputting the superimposed pictures into an image recognition neural network, and training the interference model according to a preset loss function to obtain interference model parameters that reduce the feature-map confidence; and inputting the trained interference model parameters into the working image recognition neural network. Compared with the prior art, the beneficial effect is that superimposing the trained interference model on the pictures perceived by the image recognition system makes the system unable to recognize the target object, thereby achieving the aim of interfering with target recognition.

Description

Method and device for interfering image recognition system, readable storage medium and terminal
[Technical Field]
The invention relates to the technical field of neural networks, in particular to a method and a device for interfering an image recognition system, a readable storage medium and a terminal.
[Background of the Invention]
As a class of machine learning methods, deep neural networks have attracted wide attention in recent years because of their remarkable performance in many fields such as speech recognition, image classification, and object detection. However, deep neural network models that achieve high accuracy on many tasks are vulnerable in adversarial environments. In an adversarial environment, the deep neural network is fed maliciously constructed adversarial examples derived from normal samples, such as pictures or speech. These adversarial examples are easily misclassified by the deep learning model, yet a human observer can hardly tell them apart from the normal samples. Research on generating adversarial examples has become an important field, because adversarial examples can measure the robustness of different deep-learning-based systems; they can also serve as a form of data augmentation for training more robust neural networks.
However, existing adversarial examples are usually constructed from normal samples and are difficult for humans to distinguish, so during adversarial testing in the design stage of a terminal device, it is hard for a worker to judge intuitively whether the adversarial examples are taking effect, owing to reliability limitations of the terminal device.
In view of the above, it is desirable to provide a method, an apparatus, a readable storage medium and a terminal for interfering with an image recognition system to overcome the deficiencies of the prior art.
[Summary of the Invention]
The invention aims to provide a method for interfering with an image recognition system, which is used to attack the image recognition system of a robot so that the image recognition system cannot work, thereby providing a basis for further improving it.
In order to achieve the above object, the present invention provides a method of disturbing an image recognition system, comprising the steps of:
randomly acquiring a plurality of data pictures with target objects, and establishing an attack sample set;
constructing an interference model, superimposing the randomly initialized interference model on the data pictures of the attack sample set, inputting the superimposed pictures into an image recognition neural network, and training the interference model according to a preset loss function to obtain interference model parameters that reduce the feature-map confidence;
and inputting the interference model parameters obtained by training into the working image recognition neural network.
As an improvement of the method for interfering with an image recognition system of the invention, establishing the attack sample set comprises the following steps:
randomly acquiring a plurality of data pictures with target objects in a business scene;
inputting the collected data picture into an image recognition neural network to obtain the position of a target object;
generating the box coordinates [x1, y1, x2, y2] of the target object, and establishing the attack sample set by pairing the box coordinates [x1, y1, x2, y2] one-to-one with the data pictures.
As an improvement of the method for interfering with an image recognition system of the invention, the interference model is a k × k × 3 array, where k × k is the size of the interference model and 3 corresponds to the three RGB channels of the image, and k is defined from the box coordinates of the target object as follows:
(The defining equation for k appears only as an image in the source, labeled RE-GDA0003055581620000031, and cannot be reconstructed from the text.)
H and W are the height and width of the corresponding data picture.
As an improvement of the method for interfering with an image recognition system of the invention, the randomly initialized interference model is superimposed at the center of the corresponding target object, with the center coordinates (block_x, block_y) of the interference model located as follows:
(The equation locating (block_x, block_y) appears only as an image in the source, labeled RE-GDA0003055581620000032, and cannot be reconstructed from the text.)
H and W are the height and width of the corresponding data picture.
As an improvement of the method for interfering with the image recognition system of the present invention, when the interference model is trained, the preset loss function is defined as:
Loss_t = sum(score * (score > 0.1))
Loss_block = sum(||noise||)
Loss = Loss_block + Loss_t
where Loss is the preset loss function, score is the feature-map confidence, Loss_t sums the confidences that exceed 0.1, Loss_block is a penalty term on the interference model, and noise is the interference model parameter.
As an improvement of the method for interfering with an image recognition system of the invention, the parameters of the image recognition neural network are kept unchanged while the interference model is trained, the batch normalization layers are set to inference mode, the loss gradient computed by the back-propagation algorithm from the loss function is propagated back to the interference model, and the interference model parameters are updated by the following formulas:
noise_update = noise + lr * Δnoise
mask = ||noise_update|| > T
noise = mask * T + (1 - mask) * noise_update
where noise is the interference model parameter, noise_update is the updated interference model parameter, lr is the learning rate with lr = 1e-1, mask marks the values in the updated interference model parameter whose magnitude exceeds T, and T is the constraint threshold, equal to 1.0.
The invention also provides a device for disturbing the image recognition system, which is used for executing the method for disturbing the image recognition system and comprises an attack sample construction module, a training module and an attack module;
the attack sample construction module is used for randomly acquiring a plurality of data pictures with target objects, inputting the data pictures into an image recognition neural network to obtain the positions of the target objects, and establishing an attack sample set according to the positions of the target objects and the corresponding data pictures;
the training module is used for constructing an interference model, superimposing the randomly initialized interference model on the data pictures of the attack sample set, inputting the superimposed pictures into an image recognition neural network, and training the interference model according to a preset loss function to obtain interference model parameters that reduce the feature-map confidence;
and the attack module is used for inputting the interference model parameters obtained by training into the working image recognition neural network.
As an improvement of the apparatus for interfering with an image recognition system of the invention, the training module comprises:
the preprocessing unit is used for constructing an interference model and randomly initializing the interference model;
the confidence extraction unit is used for superimposing the randomly initialized interference model on the attack sample set and inputting the result into an image recognition neural network to obtain the feature-map confidence;
and the training unit is used for repeatedly propagating the loss gradient computed by the back-propagation algorithm back to the interference model according to the preset loss function and updating the interference model parameters until a preset iteration stop condition is reached, then saving the trained interference model parameters.
The invention also provides a readable storage medium, in which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above method for interfering with an image recognition system.
The invention also provides a terminal comprising the readable storage medium and a processor, wherein the processor implements the steps of the above method for interfering with an image recognition system when executing the computer program on the readable storage medium.
Compared with the prior art, the method for interfering with an image recognition system and the electronic device have the advantage that superimposing the trained interference model on the pictures perceived by the image recognition system makes the system unable to recognize the target object, thereby achieving the aim of interfering with target recognition.
[Description of the Drawings]
Fig. 1 is a flowchart illustrating a method for disturbing an image recognition system according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an apparatus for disturbing an image recognition system according to an embodiment of the present invention.
[Detailed Description of Embodiments]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, where no conflict arises, the features of the embodiments of the invention may be combined with each other within the protection scope of the invention. Additionally, although functional blocks are divided in the apparatus schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in a different order from the block divisions in the apparatus or the sequences in the flowcharts.
The robot of embodiments of the present invention may be configured in any suitable shape to perform a particular business function operation, for example, the robot of embodiments of the present invention may be a delivery robot, a transfer robot, a care robot, and the like.
The invention provides, in a first aspect, a method of interfering with an image recognition system, used for attacking the image recognition system of a robot.
referring to fig. 1, in an embodiment of the invention, a method for disturbing an image recognition system includes the following steps:
Step 101, randomly acquiring a plurality of data pictures with target objects, and establishing an attack sample set.
The role of the attack sample set is to provide the position of the target object so that the interference model has a training target. To this end, establishing the attack sample set comprises the following steps:
randomly acquiring a plurality of data pictures with target objects in a business scene, wherein the data pictures can be acquired through a camera on a robot;
inputting the collected data pictures into an image recognition neural network to obtain the position of the target object, wherein the image recognition neural network is a trained image recognition network built on algorithms such as YOLO or SSD;
generating the box coordinates [x1, y1, x2, y2] of the target object, and establishing the attack sample set by pairing the box coordinates with the data pictures; that is, the box coordinates [x1, y1, x2, y2] represent the position of the target object on the corresponding data picture.
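The steps above can be sketched as follows. `detect_boxes` is a hypothetical stand-in for the trained image recognition neural network (e.g. a YOLO- or SSD-based detector); the patent does not fix a concrete detection API, so any callable mapping a picture to one box works here:

```python
import numpy as np

def build_attack_sample_set(pictures, detect_boxes):
    """Pair each data picture with the box coordinates [x1, y1, x2, y2]
    reported for its target object, forming the attack sample set."""
    sample_set = []
    for pic in pictures:
        x1, y1, x2, y2 = detect_boxes(pic)
        sample_set.append({"picture": pic, "box": [x1, y1, x2, y2]})
    return sample_set

# A stand-in "detector" that always reports a fixed box.
pictures = [np.zeros((64, 64, 3)), np.ones((64, 64, 3))]
samples = build_attack_sample_set(pictures, lambda pic: (10, 12, 40, 44))
```

Each entry keeps the picture and its box together, which is all the later training steps need.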
Step 102, constructing an interference model, superimposing the randomly initialized interference model on the data pictures of the attack sample set, inputting the superimposed pictures into an image recognition neural network, and training the interference model according to a preset loss function to obtain interference model parameters that reduce the feature-map confidence.
Specifically, the interference model is a k × k × 3 array, where k × k is the size of the interference model and 3 corresponds to the three RGB channels of the image; that is, the interference model is a square picture of a certain size, and k is defined from the box coordinates of the target object as follows:
(The defining equation for k appears only as an image in the source, labeled RE-GDA0003055581620000071, and cannot be reconstructed from the text.)
where H and W are the height and width of the corresponding data picture.
Here, after the interference model is randomly initialized, its parameter values are constrained not to exceed T, where T is the constraint threshold and T = 1.0. It can be understood that superimposing the interference model on a data picture is the process of adding block noise to the picture, which achieves the purpose of interfering with image recognition.
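The superposition step can be sketched as below. The centring rule (block_x, block_y) is given only as an image in the patent, so taking the box midpoint as the centre is an illustrative assumption:

```python
import numpy as np

def overlay_patch(picture, patch, box):
    """Superimpose a k*k*3 interference patch on a picture, centred on the
    target object's box. Using the box midpoint as (block_x, block_y) is an
    assumption; the patent gives the centring formula only as an image."""
    h, w, _ = picture.shape
    k = patch.shape[0]
    x1, y1, x2, y2 = box
    block_x = (x1 + x2) // 2          # assumed centre definition
    block_y = (y1 + y2) // 2
    x0 = int(np.clip(block_x - k // 2, 0, w - k))
    y0 = int(np.clip(block_y - k // 2, 0, h - k))
    out = picture.copy()
    out[y0:y0 + k, x0:x0 + k, :] += patch   # additive block noise
    return out

img = np.zeros((64, 64, 3))
noise = np.full((8, 8, 3), 0.5)
attacked = overlay_patch(img, noise, (10, 10, 40, 40))
```

Only the k × k block around the assumed centre is perturbed; the rest of the picture is unchanged.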
In a preferred embodiment, the image recognition neural network locates the target object in the data picture through its position response; specifically, the network produces a response at the center position of the target object in the data picture. The randomly initialized interference model is therefore superimposed on the center of the corresponding target object so as to exert the maximum interference effect; that is, the center coordinates (block_x, block_y) of the interference model can be located as:
(The equation locating (block_x, block_y) appears only as an image in the source, labeled RE-GDA0003055581620000072, and cannot be reconstructed from the text.)
then, a preset loss function is defined, because the image recognition neural network recognizes the target object through the feature map confidence, specifically, the feature map confidence value is between 0 and 1, and the higher the feature map confidence is, the higher the probability of obtaining the target object is, for example, when the feature map confidence of the data picture is 0.99, the probability of the target object appearing on the data picture is 99%, at this time, the image recognition neural network can recognize the target object, and when the feature map confidence of the data picture is 0.17, the probability of the target object appearing on the data picture is 17%, at this time, the image recognition neural network cannot recognize the target object.
Based on this, the preset loss function is defined as:
Loss_t = sum(score * (score > 0.1))
Loss_block = sum(||noise||)
Loss = Loss_block + Loss_t
where Loss is the preset loss function, score is the feature-map confidence, Loss_t sums the confidences that exceed 0.1, Loss_block is a penalty term on the interference model, and noise is the interference model parameter.
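A minimal numerical sketch of this loss, assuming score is a vector of feature-map confidences and reading ||noise|| as the element-wise magnitude of the patch parameters (the patent does not spell out the norm):

```python
import numpy as np

def interference_loss(score, noise):
    """Loss = Loss_block + Loss_t, per the patent's definition:
    Loss_t sums the feature-map confidences that exceed 0.1, and
    Loss_block = sum(||noise||) penalises the patch magnitude."""
    loss_t = np.sum(score * (score > 0.1))       # gate out confidences <= 0.1
    loss_block = np.sum(np.abs(noise))           # ||noise|| read element-wise (assumption)
    return loss_block + loss_t

score = np.array([0.05, 0.2, 0.9])   # only 0.2 and 0.9 pass the 0.1 gate
noise = np.array([0.5, -0.5])
loss = interference_loss(score, noise)
```

Minimizing Loss_t pushes the surviving confidences down, while Loss_block keeps the patch itself small.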
Then, while the interference model is trained, the parameters of the image recognition neural network are kept unchanged and the batch normalization layers are set to inference mode. According to the computed loss function, the loss gradient obtained by the back-propagation algorithm is propagated back to the interference model and its parameters are updated, until a preset iteration stop condition is reached, yielding interference model parameters that reduce the feature-map confidence. The update is defined as follows:
noise_update = noise + lr * Δnoise
mask = ||noise_update|| > T
noise = mask * T + (1 - mask) * noise_update
where noise_update is the updated interference model parameter, lr is the learning rate with lr = 1e-1, and mask marks the values in the updated interference model parameter whose magnitude exceeds T.
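The update and clamping rule can be sketched as follows. Note that, as written, the patent's formula clamps every out-of-range value to +T regardless of its sign; the sketch preserves that behavior rather than guessing at a sign-preserving variant:

```python
import numpy as np

def update_noise(noise, grad, lr=1e-1, T=1.0):
    """One training step for the interference patch, per the patent:
    noise_update = noise + lr * grad, then values whose magnitude exceeds
    the constraint threshold T are clamped via a binary mask."""
    noise_update = noise + lr * grad
    mask = (np.abs(noise_update) > T).astype(float)
    # mask * T clamps the out-of-range entries to +T (as the formula states);
    # (1 - mask) keeps the in-range entries unchanged.
    return mask * T + (1 - mask) * noise_update

noise = np.array([0.95, -0.2])
grad = np.array([1.0, 1.0])
new_noise = update_noise(noise, grad)
```

Here the first entry overshoots T = 1.0 and is clamped back to 1.0, while the second stays within range.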
Step 103, inputting the trained interference model parameters into the working image recognition neural network, so that after the image recognition neural network captures the target object, it cannot recognize the target object because the feature-map confidence is too small, thereby achieving the purpose of attacking the image recognition neural network.
For example, suppose the robot needs to enter an elevator. Before the trained interference model parameters are input into the image recognition neural network, the network's feature-map confidence for the elevator door would be 0.99; the robot can then recognize the elevator door and carry out its next task through it. After the trained interference model parameters are input, the network's feature-map confidence for the elevator door drops to 0.17; the robot can no longer recognize the elevator door and cannot continue working.
In conclusion, the method for interfering with an image recognition system provided by the invention superimposes the trained interference model on the pictures perceived by the image recognition system so that the system cannot recognize the target object, thereby achieving the purpose of interfering with target recognition.
The second aspect of the present invention provides an apparatus 100 for interfering with an image recognition system, which uses the above method to attack the image recognition system (i.e. the image recognition neural network) of a robot. Since the implementation principle of the apparatus is consistent with that of the method, it is not described again here for brevity.
Referring to fig. 2, in an embodiment of the present invention, the apparatus 100 for interfering with the image recognition system includes an attack sample construction module 10, a training module 20, and an attack module 30, as follows:
the countermeasure sample construction module 10 is configured to randomly acquire a plurality of data pictures with a target object, input the data pictures into an image recognition neural network to obtain a position of the target object, and then establish a countermeasure sample set according to the position of the target object and the corresponding data pictures.
The training module 20 is configured to construct an interference model, superimpose the randomly initialized interference model on the data pictures of the attack sample set, input the superimposed pictures into an image recognition neural network, and train the interference model according to a preset loss function to obtain interference model parameters that reduce the feature-map confidence.
And the attack module 30 is used for inputting the interference model parameters obtained by training into the working image recognition neural network.
Further, the training module 20 includes:
the preprocessing unit is used for constructing an interference model and randomly initializing the interference model;
the confidence extraction unit is used for superimposing the randomly initialized interference model on the attack sample set and inputting the result into an image recognition neural network to obtain the feature-map confidence;
and the training unit is used for repeatedly propagating the loss gradient computed by the back-propagation algorithm back to the interference model according to the preset loss function and updating the interference model parameters until a preset iteration stop condition is reached, then saving the trained interference model parameters.
A further aspect of the present invention provides a readable storage medium (not shown in the drawings), which stores a computer program implementing the steps of the method for interfering with an image recognition system of any one of the above embodiments.
The present invention also provides a terminal (not shown in the figures), which includes the readable storage medium of the above embodiment and a processor; when the processor executes the computer program on the readable storage medium, it implements the steps of the method for interfering with an image recognition system of any one of the above embodiments.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system or apparatus/terminal device and method can be implemented in other ways. For example, the above-described system or apparatus/terminal device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The invention is not limited solely to that described in the specification and embodiments, and additional advantages and modifications will readily occur to those skilled in the art, so that the invention is not limited to the specific details, representative apparatus, and illustrative examples shown and described herein, without departing from the spirit and scope of the general concept as defined by the appended claims and their equivalents.

Claims (10)

1. A method of interfering with an image recognition system, comprising the following steps:
randomly acquiring a plurality of data pictures with target objects, and establishing an attack sample set;
constructing an interference model, superimposing the randomly initialized interference model on the data pictures of the attack sample set, inputting the superimposed pictures into an image recognition neural network, and training the interference model according to a preset loss function to obtain interference model parameters that reduce the feature-map confidence;
and inputting the interference model parameters obtained by training into the working image recognition neural network.
2. The method of interfering with an image recognition system of claim 1, wherein establishing the attack sample set comprises the following steps:
randomly acquiring a plurality of data pictures with target objects in a business scene;
inputting the collected data picture into an image recognition neural network to obtain the position of a target object;
generating the box coordinates [x1, y1, x2, y2] of the target object, and establishing the attack sample set by pairing the box coordinates [x1, y1, x2, y2] one-to-one with the data pictures.
3. The method of interfering with an image recognition system of claim 2, wherein the interference model is a k × k × 3 array, where k × k is the size of the interference model and 3 corresponds to the three RGB channels of the image, and k is defined from the box coordinates of the target object as follows:
(The defining equation for k appears only as an image in the source, labeled FDA0002851680990000011, and cannot be reconstructed from the text.)
H and W are the height and width of the corresponding data picture.
4. The method of interfering with an image recognition system of claim 3, wherein the randomly initialized interference model is superimposed on the center of the corresponding target object, with the center coordinates (block_x, block_y) of the interference model located as:
(The equation locating (block_x, block_y) appears only as an image in the source, labeled FDA0002851680990000021, and cannot be reconstructed from the text.)
H and W are the height and width of the corresponding data picture.
5. The method of interfering with an image recognition system of claim 1, wherein the preset loss function is defined as:
Loss_t = sum(score * (score > 0.1))
Loss_block = sum(||noise||)
Loss = Loss_block + Loss_t
where Loss is the preset loss function, score is the feature-map confidence, Loss_t sums the confidences that exceed 0.1, Loss_block is a penalty term on the interference model, and noise is the interference model parameter.
6. The method of claim 5, wherein the interference model is trained without changing the parameters of the image recognition neural network, the batch normalization layers are set to inference mode, and the interference model parameters are updated by feeding the loss gradient, computed from the loss function by the back-propagation algorithm, back into the interference model, according to:
noise_update = noise + lr * Δnoise
mask = ||noise_update|| > T
noise = mask * T + (1 - mask) * noise_update
wherein noise denotes the interference model parameters, noise_update the updated interference model parameters, lr the learning rate with lr = 1e-1, mask the indicator of entries of the updated parameters whose magnitude exceeds T, and T the constraint threshold with T = 1.0.
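The update and constraint of claim 6 as a literal NumPy sketch. As written in the claim, the mask clamps violating entries to T; a signed clamp to ±T is a common alternative formulation.

```python
import numpy as np

def update_noise(noise, delta, lr=1e-1, T=1.0):
    # Gradient feedback step on the interference model parameters.
    noise_update = noise + lr * delta
    # Entries whose magnitude exceeds the constraint threshold T.
    mask = (np.abs(noise_update) > T).astype(float)
    # Clamp violating entries to T, keep the rest (literal claim formula).
    return mask * T + (1 - mask) * noise_update

noise = np.array([0.95, 0.0])
delta = np.array([1.0, 2.0])  # back-propagated gradient (toy values)
new_noise = update_noise(noise, delta)  # first entry clamped to T = 1.0
```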
7. An apparatus for disturbing an image recognition system, configured to perform the method of disturbing an image recognition system according to any of claims 1 to 6, characterized in that the apparatus comprises an attack sample construction module, a training module and an attack module;
the attack sample construction module is configured to randomly acquire a plurality of data pictures containing target objects, input the data pictures into an image recognition neural network to obtain the positions of the target objects, and establish an attack sample set from the positions of the target objects and the corresponding data pictures;
the training module is configured to construct an interference model, superimpose the randomly initialized interference model onto the data pictures of the attack sample set, input the superimposed data pictures into the image recognition neural network, and train the interference model according to a preset loss function to obtain interference model parameters that reduce the confidence of the feature map;
and the attack module is configured to input the trained interference model parameters into the working image recognition neural network.
8. The apparatus for disturbing an image recognition system according to claim 7, wherein the training module comprises:
a preprocessing unit, configured to construct the interference model and randomly initialize it;
a confidence extraction unit, configured to superimpose the randomly initialized interference model onto the attack sample set and input the result into the image recognition neural network to obtain the confidence of the feature map;
and a training unit, configured to repeatedly feed the loss gradient computed by the back-propagation algorithm according to the preset loss function back into the interference model, updating the interference model parameters until a preset iteration stop condition is reached, and to store the trained interference model parameters.
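A toy end-to-end sketch of the training unit's loop: the hypothetical `delta_fn` stands in for the back-propagated gradient feedback, and a fixed step budget stands in for the preset iteration stop condition.

```python
import numpy as np

def train_patch(noise, delta_fn, lr=1e-1, T=1.0, steps=100):
    # Repeatedly feed the loss gradient back into the interference
    # model until the iteration budget (the stop condition) is spent.
    for _ in range(steps):
        noise_update = noise + lr * delta_fn(noise)
        mask = (np.abs(noise_update) > T).astype(float)
        noise = mask * T + (1 - mask) * noise_update
    return noise

# Toy surrogate loss sum((noise - 0.4)^2); its descent direction is
# -2 * (noise - 0.4), so the loop should converge to 0.4.
final = train_patch(np.zeros(3), lambda n: -2.0 * (n - 0.4))
```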
9. A readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of disturbing an image recognition system according to any of claims 1 to 6.
10. A terminal, characterized in that it comprises the readable storage medium according to claim 9 and a processor which, when executing the computer program on the readable storage medium, implements the steps of the method of disturbing an image recognition system according to any of claims 1 to 6.
CN202011529605.1A 2020-12-22 2020-12-22 Method and device for interfering image recognition system, readable storage medium and terminal Pending CN112949675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011529605.1A CN112949675A (en) 2020-12-22 2020-12-22 Method and device for interfering image recognition system, readable storage medium and terminal


Publications (1)

Publication Number Publication Date
CN112949675A 2021-06-11

Family

ID=76234809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011529605.1A Pending CN112949675A (en) 2020-12-22 2020-12-22 Method and device for interfering image recognition system, readable storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112949675A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920425A (en) * 2021-09-03 2022-01-11 佛山中科云图智能科技有限公司 Target violation point acquisition method and system based on neural network model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200285952A1 (en) * 2019-03-08 2020-09-10 International Business Machines Corporation Quantifying Vulnerabilities of Deep Learning Computing Systems to Adversarial Perturbations
CN111967584A (en) * 2020-08-19 2020-11-20 北京字节跳动网络技术有限公司 Method, device, electronic equipment and computer storage medium for generating countermeasure sample



Similar Documents

Publication Publication Date Title
Wang et al. SaliencyGAN: Deep learning semisupervised salient object detection in the fog of IoT
WO2019105163A1 (en) Target person search method and apparatus, device, program product and medium
CN110363817B (en) Target pose estimation method, electronic device, and medium
CN114186632B (en) Method, device, equipment and storage medium for training key point detection model
CN110378480B (en) Model training method and device and computer readable storage medium
CN104966079A (en) Distinguishing live faces from flat surfaces
CN110648397B (en) Scene map generation method and device, storage medium and electronic equipment
CN111881804B (en) Posture estimation model training method, system, medium and terminal based on joint training
CN110163111A (en) Method, apparatus of calling out the numbers, electronic equipment and storage medium based on recognition of face
CN109919085B (en) Human-human interaction behavior identification method based on light-weight convolutional neural network
CN111612841A (en) Target positioning method and device, mobile robot and readable storage medium
CN114387647B (en) Anti-disturbance generation method, device and storage medium
CN110852311A (en) Three-dimensional human hand key point positioning method and device
CN111414888A (en) Low-resolution face recognition method, system, device and storage medium
CN111583399A (en) Image processing method, device, equipment, medium and electronic equipment
US20200005078A1 (en) Content aware forensic detection of image manipulations
CN112381010A (en) Table structure restoration method, system, computer equipment and storage medium
KR20220065234A (en) Apparatus and method for estimating of 6d pose
CN112949675A (en) Method and device for interfering image recognition system, readable storage medium and terminal
GB2607440A (en) Method and apparatus for determining encryption mask, device and storage medium
CN113221767B (en) Method for training living body face recognition model and recognizing living body face and related device
Gheitasi et al. Estimation of hand skeletal postures by using deep convolutional neural networks
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
WO2023137923A1 (en) Person re-identification method and apparatus based on posture guidance, and device and storage medium
CN113255512B (en) Method, apparatus, device and storage medium for living body identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination