CN110991603B - Local robustness verification method of neural network - Google Patents

Local robustness verification method of neural network


Publication number
CN110991603B
Authority
CN
China
Prior art keywords
neural network
abstract
layer
local
robustness
Prior art date
Legal status
Active
Application number
CN201911011197.8A
Other languages
Chinese (zh)
Other versions
CN110991603A (en)
Inventor
张立军
李建霖
Current Assignee
Guangzhou Institute Of Intelligent Software Industry
Original Assignee
Guangzhou Institute Of Intelligent Software Industry
Priority date
Filing date
Publication date
Application filed by Guangzhou Institute Of Intelligent Software Industry
Priority to CN201911011197.8A
Publication of CN110991603A
Application granted
Publication of CN110991603B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention relates to a local robustness verification method for a neural network, comprising the following steps: inputting the neural network obtained through training, and processing the semantics of the neural network to obtain a model; based on abstract interpretation, abstracting the semantics of the neural network into operations on an abstract domain of a mathematical structure (or transformations on the abstract domain); giving an input and a small perturbation; abstracting the input and the small perturbation into elements of the abstract domain, introducing a symbolic propagation method, and converting the local robustness verification problem of the neural network into a problem of operations and decisions on the abstract domain; and outputting the operation result so as to confirm whether the local robustness of the neural network is verified.

Description

Local robustness verification method of neural network
Technical Field
The invention relates to the technical field of software verification, in particular to a local robustness verification method of a neural network.
Background
Deep learning is a research direction in the field of machine learning. Through recent developments, deep learning has advanced significantly in speech recognition and machine vision, and deep learning methods based on neural networks continue to develop. A neural network approximates artificial intelligence by simulating the thinking activity of human neurons with layered, interconnected nodes: it learns from training samples, compares the network output with the expected output, and readjusts the connection weights of the network according to differentiable indices such as a loss function, thereby improving the recognition accuracy on new samples.
In the field of computer vision research, adversarial samples are a popular research direction. An adversarial sample is a sample that has been slightly perturbed so as to induce a neural network to produce a wrong recognition result, while a human observer would not consider the sample to have changed. For example, when a noise signal is added to a simple picture of the digit "3", a human typically ignores the noise and still recognizes "3", while some neural networks produce the wrong classification "5". Neural networks therefore need to be designed with attention to the problem of verifying local robustness. Verifying the local robustness of a neural network means verifying whether adversarial samples exist within the range of a given input and a small perturbation. Generally, for a specific sample it is relatively simple to determine whether it is adversarial, but it is difficult to formally verify whether the neural network admits adversarial samples under a given condition, and the training and tuning of the neural network itself is complex and time-consuming; it is therefore highly desirable to provide a formal verification technique for the local robustness of neural networks. However, because neural networks lack interpretability to a certain degree, formal verification techniques have so far not achieved a breakthrough in the local robustness verification of neural networks.
Disclosure of Invention
In view of the above, it is necessary to provide a local robustness verification method for a neural network which combines abstract interpretation with the characteristics of the neural network and verifies security properties of the neural network by a symbolic propagation method; the method is theoretically sound, can verify large-scale neural networks, and has a high verification speed.
A method of local robustness verification of a neural network, comprising:
inputting the neural network obtained through training, and processing the semantics of the neural network to obtain a model;
based on abstract interpretation, abstracting the semantics of the neural network into operations on an abstract domain of a mathematical structure;
giving an input and a small perturbation;
abstracting the input and the small perturbation into elements of the abstract domain, introducing a symbolic propagation method, and converting the local robustness verification problem of the neural network into a problem of operations and decisions on the abstract domain;
and outputting an operation result so as to confirm whether the local robustness of the neural network is verified.
The neural network specifically comprises a neural network with ReLU as the activation function, having fully connected layers, convolutional layers and pooling layers.
The method handles the behavior of the fully connected, convolutional and pooling layers of the neural network with ReLU as the activation function by abstract transformers.
The abstract domains at least comprise an interval abstract domain, a symmetric polyhedron (zonotope) abstract domain, or a convex polyhedron abstract domain.
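As a concrete illustration, the simplest of these, the interval abstract domain, can be sketched in a few lines of Python; the class and method names below are chosen for exposition and are not taken from the patent.

```python
# A minimal sketch of the interval abstract domain (names are illustrative,
# not taken from the patent). An abstract element assigns each neuron a
# closed interval that over-approximates its set of reachable values.

class Interval:
    def __init__(self, lo: float, hi: float):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other: "Interval") -> "Interval":
        # Abstract addition: add the bounds component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, a: float) -> "Interval":
        # Abstract multiplication by a constant; a < 0 flips the bounds.
        lo, hi = a * self.lo, a * self.hi
        return Interval(min(lo, hi), max(lo, hi))

    def join(self, other: "Interval") -> "Interval":
        # Least upper bound: the smallest interval containing both elements.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def __repr__(self) -> str:
        return f"[{self.lo}, {self.hi}]"

# Example: x in [0, 1] and y in [2, 3] give 2x + y in [2, 5]:
# Interval(0, 1).scale(2) + Interval(2, 3)  ->  [2, 5]
```

The symmetric polyhedron and convex polyhedron domains refine this by additionally tracking linear relations between variables, at a higher computational cost.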
The workflow of the whole algorithm comprises the following steps (a sketch of a corresponding driver is given after this list):
reading the neural network, an input x and a perturbation d;
establishing and initializing the data structures of the symbolic representation;
analyzing the values of the neural network over the local region layer by layer and neuron by neuron;
analyzing at the output layer whether an adversarial sample may exist in the over-approximation sense;
and outputting the analysis result.
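For illustration only, these five steps can be driven as in the following Python sketch, which uses the interval abstract domain on a fully connected ReLU network; the function names, the representation of the network as lists of weights and biases, and the final check are assumptions made for exposition, not the patent's implementation.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Sound interval propagation through the affine map y = W x + b:
    positive weights take the same-side bound, negative weights the
    opposite-side bound."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def verify_local_robustness(weights, biases, x, d, label):
    # Steps 1-2: read the network, the input x and the perturbation d,
    # and initialize the abstract element: one interval [x_i - d, x_i + d]
    # per input dimension.
    lo, hi = x - d, x + d
    # Step 3: analyze the network over the local region layer by layer.
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(weights) - 1:                      # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # Step 4: at the output layer, check in the over-approximation sense
    # whether any other class could score at least as high as `label`.
    robust = all(lo[label] > hi[k] for k in range(len(lo)) if k != label)
    # Step 5: report the analysis result and the neuron value ranges.
    return robust, (lo, hi)
```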
The layer-by-layer, neuron-by-neuron analysis of the neural network over the local region is carried out by updating the symbolic representation according to the semantics of the neural network and applying operations in the abstract domain (or transformations on the abstract domain) to the abstract elements; the details are given in the detailed description below.
The invention combines abstract interpretation with a symbolic propagation method designed for the characteristics of neural networks, and provides a neural network security verification method. Taking the classical abstract interpretation framework from program analysis as its theoretical cornerstone, the method addresses the adversarial sample and security verification problems of neural networks both in theory and in practice; compared with the prior art, it is theoretically sound, scales to larger networks, and verifies faster.
Drawings
FIG. 1 is a flow diagram of a method of local robustness verification of a neural network in one embodiment of the invention;
FIG. 2 is a comparison of different abstract domains in an embodiment of the invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The embodiment of the invention designs and implements a verification tool (called DeepSymbol in this embodiment) for the phenomenon that a neural network may have adversarial samples. It considers the local robustness of the neural network, i.e., verifying whether adversarial samples exist within the range of a given input x and a small perturbation d: the input range of the neurons of each layer is over-approximated by abstract elements, while the computation from layer to layer is soundly modeled by domain operations in the abstract domain. Abstract interpretation rests on a rigorous theory which guarantees the soundness of reasoning based on over-approximating abstractions: any property derived by the over-approximate abstract reasoning must hold in the actual neural network. However, because of the precision loss caused by the over-approximation, it cannot be guaranteed that every property that holds in the neural network can be obtained by this reasoning.
DeepSymbol is a sound (reliable) and scalable deep neural network analyzer, whose workflow is shown in FIG. 1. Based on over-approximation, DeepSymbol can automatically prove safety properties (e.g., local robustness) of real-world neural networks (e.g., convolutional neural networks).
DeepSymbol uses classical abstract interpretation techniques to reason about the security and robustness of neural networks. It handles the behavior of the fully connected layers, convolutional layers and pooling layers (max pooling) of a neural network with the linear rectification function (ReLU) as activation function by abstract transformers. By introducing the symbolic propagation technique, DeepSymbol can give more precise results when, for the given local robustness property, the neural network exhibits a stable pattern (a large number of neurons that are always active or always inactive).
Considering local robustness, a perturbation of size δ in each dimension of a given input is described as an element of the interval abstract domain of abstract interpretation; linear operations are then carried out at the fully connected and convolutional layers according to the weights and biases, followed by the activation function processing. Through these operations on abstract elements, it can be decided in the over-approximation sense whether an adversarial sample that is misclassified may exist within the perturbation range; if not, local robustness is considered verified.
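Continuing the driver sketched above (verify_local_robustness), a toy run on a hypothetical two-class network illustrates the decision; all numbers are invented for the example.

```python
import numpy as np

# Toy network: 2 inputs -> 2 hidden ReLU neurons -> 2 output classes.
# All numbers are made up purely for illustration.
weights = [np.array([[1.0, -1.0], [0.5, 1.0]]),    # hidden layer
           np.array([[2.0, -1.0], [-1.0, 1.5]])]   # output layer
biases = [np.array([0.0, 0.1]), np.array([0.2, -0.2])]

x = np.array([1.0, 0.5])   # the given input; the network classifies it as 1
d = 0.05                   # the small perturbation radius (per dimension)

robust, (lo, hi) = verify_local_robustness(weights, biases, x, d, label=1)
print("verified" if robust else "unknown: an adversarial sample may exist")
# Here lo[1] = 0.7375 > hi[0] = 0.375, so local robustness is verified.
```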
When these operations are carried out, sets of points in the space are over-approximated by different abstract domains, whose precision and efficiency differ; the abstract domains include the interval abstract domain, the symmetric polyhedron (zonotope) abstract domain, the convex polyhedron abstract domain, and the like. In particular, for a small network with two fully connected layers of two neurons each, where the neurons of a layer apply the same operation to the same preceding values, the value ranges obtained for the result differ considerably between domains: the polyhedron domain performs best and the interval abstract domain worst (cf. FIG. 2).
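The precision gap can be reproduced with a deliberately small example (hypothetical numbers): both first-layer neurons compute the same linear expression, and the second layer takes their difference, which is exactly 0 for every input; the interval domain cannot see the correlation, while a relational domain, or the symbolic propagation described below, recovers it.

```python
import numpy as np

# Both hidden neurons compute x1 + x2; the output neuron computes h1 - h2,
# which is identically 0. The interval domain forgets that h1 and h2 are
# correlated and reports a wide range. All numbers are illustrative.
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # x1, x2 in [0, 1]

W1 = np.array([[1.0, 1.0], [1.0, 1.0]])               # h1 = h2 = x1 + x2
h_lo, h_hi = W1 @ lo, W1 @ hi                         # both in [0, 2]

W2 = np.array([[1.0, -1.0]])                          # o = h1 - h2
W2p, W2n = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
o_lo = W2p @ h_lo + W2n @ h_hi                        # [-2.0]
o_hi = W2p @ h_hi + W2n @ h_lo                        # [ 2.0]
print(o_lo, o_hi)  # interval result [-2, 2], although o is exactly 0

# A relational domain (or symbolic propagation) keeps h1 and h2 as the
# expression x1 + x2, so o simplifies to 0 and the exact range is [0, 0].
```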
When processing the ReLU activation function, DeepSymbol first intersects the abstract element at the input of the activation function with the constraints expressing the positive half-axis and the negative half-axis respectively, obtaining two abstract elements corresponding to the neuron being active and inactive; the variable of the inactive element is then projected onto the coordinate axis (its output becomes 0); finally the two elements are joined to obtain the abstract element corresponding to the output of the activation function.
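On the interval abstract domain, this meet-project-join construction reduces to the following sketch (illustrative code, not the patent's implementation):

```python
def relu_transformer(lo: float, hi: float) -> tuple:
    """Abstract ReLU on an interval [lo, hi] via meet with the two
    half-axis constraints, projection of the inactive part, and join."""
    # Meet with the non-negative half-axis x >= 0 (active branch).
    active = (max(lo, 0.0), hi) if hi >= 0.0 else None
    # Meet with the non-positive half-axis x <= 0 (inactive branch),
    # then project its variable onto the axis: the output is 0.
    inactive = (0.0, 0.0) if lo <= 0.0 else None
    # Join the two branches (least upper bound in the interval domain).
    branches = [b for b in (active, inactive) if b is not None]
    return (min(b[0] for b in branches), max(b[1] for b in branches))

# relu_transformer(-1.0, 2.0)  == (0.0, 2.0)
# relu_transformer(0.5, 2.0)   == (0.5, 2.0)   # neuron always active
# relu_transformer(-3.0, -1.0) == (0.0, 0.0)   # neuron always inactive
```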
The working steps of DeepSymbol are as follows. First step: read the network, the input x and the perturbation d. Second step: establish and initialize the data structures of the symbolic representation. Third step: analyze the values of the network over the local region layer by layer and neuron by neuron. Fourth step: analyze at the output layer whether an adversarial sample may exist in the over-approximation sense. Fifth step: output the analysis result (whether the property is verified), the value range of each neuron, the running time and other performance indicators.
Specifically, in the third step, the per-neuron processing is as follows. The analysis state is a tuple (n#, C, S, ξ), where n# ∈ N# is an abstract element of a chosen abstract domain N# (such as the interval abstract domain), C is the set of free symbolic variables (free symbolic variables are those that cannot be linearly represented by other symbolic variables), S is the set of constrained symbolic variables (constrained symbolic variables can be linearly represented by free symbolic variables), and ξ is a mapping from constrained symbolic variables to linear expressions over free symbolic variables. The algorithm of the linear assignment transformer [[d := Σᵢ aᵢ·cᵢ + b]]# is given in Table 1. The conditional test transformer [[e ≥ 0]]# applies the corresponding test of the abstract domain to the abstract element n# and leaves C, S and ξ unchanged. The pseudocode of the join operation is given in Table 2. For a ReLU neuron d := ReLU(e) with e = Σᵢ aᵢ·cᵢ + b, the abstract transformer is defined as follows:

join(φ⁻, φ⁺)

where φ⁻ = [[d := 0]]# [[e ≤ 0]]# (n#, C, S, ξ) and φ⁺ = [[d := e]]# [[e ≥ 0]]# (n#, C, S, ξ). For a neuron d := max₁≤ᵢ≤ₖ cᵢ of the pooling layer: if there is one cⱼ whose lower bound is larger than the upper bounds of all the other cᵢ, then cⱼ is taken directly as the symbolic representation of d; otherwise the abstract transformer is defined according to the following equation:

join(φ₁, join(φ₂, ..., join(φₖ₋₁, φₖ)))

where φᵢ = [[d := cᵢ]]# [[cᵢ ≥ c₁]]# ... [[cᵢ ≥ cₖ]]# (n#, C, S, ξ).
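On the interval abstract domain, the max-pooling transformer just defined can be sketched as follows; the function is illustrative and returns the index of a dominating input so that its symbolic expression can be reused.

```python
def maxpool_transformer(bounds):
    """Abstract max over k inputs with interval bounds [(lo_i, hi_i), ...].

    Returns (j, (lo, hi)): j is the index of a dominating input whose
    symbolic expression can be taken over unchanged for d, or None when
    the maximum must be over-approximated by joining the branches.
    Illustrative sketch on the interval domain only.
    """
    his = [hi for _, hi in bounds]
    for j, (lo_j, hi_j) in enumerate(bounds):
        # c_j dominates if its lower bound is larger than every other
        # input's upper bound; then d := c_j exactly.
        if all(lo_j > his[i] for i in range(len(bounds)) if i != j):
            return j, (lo_j, hi_j)
    # Join of the branches phi_i = [[d := c_i]]# [[c_i >= c_1]]# ...:
    # on intervals this collapses to [max_i lo_i, max_i hi_i].
    return None, (max(lo for lo, _ in bounds), max(his))

# maxpool_transformer([(3.0, 4.0), (0.0, 1.0)])  -> (0, (3.0, 4.0))
# maxpool_transformer([(0.0, 2.0), (1.0, 3.0)])  -> (None, (1.0, 3.0))
```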
Table 1: pseudocode of the linear assignment transformer [[d := Σᵢ aᵢ·cᵢ + b]]# (given as a figure in the original publication).
Table 2: pseudocode of the join operation (given as a figure in the original publication).
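Since the two tables are not reproduced here, the following is a speculative Python sketch of what a linear assignment transformer over the state (n#, C, S, ξ) could look like on the interval domain; every name and representation choice below is an assumption made for exposition, not the patent's pseudocode.

```python
from dataclasses import dataclass, field

@dataclass
class SymbolicState:
    """Sketch of the analysis state (n#, C, S, xi). A linear expression is
    a dict mapping a symbol name to its coefficient, with the constant
    term stored under the key None. Purely illustrative, not the patent's
    data structure."""
    box: dict                                      # n#: symbol -> (lo, hi)
    free: set = field(default_factory=set)         # C: free symbolic vars
    constrained: set = field(default_factory=set)  # S: constrained vars
    xi: dict = field(default_factory=dict)         # xi: S -> expr over C

def assign_linear(state: SymbolicState, d: str, expr: dict) -> None:
    """[[d := sum_i a_i*c_i + b]]#: record d's linear expression over free
    variables (expanding constrained ones through xi) and bound it in n#."""
    flat = {None: expr.get(None, 0.0)}
    for c, a in expr.items():
        if c is None:
            continue
        # A constrained variable expands via xi; a free one stands for itself.
        for s, coef in state.xi.get(c, {c: 1.0}).items():
            flat[s] = flat.get(s, 0.0) + a * coef
    const = flat.pop(None)
    lo = hi = const
    for s, a in flat.items():                # interval evaluation of the expr
        slo, shi = state.box[s]
        lo += min(a * slo, a * shi)
        hi += max(a * slo, a * shi)
    state.box[d] = (lo, hi)                  # update the abstract element
    state.constrained.add(d)                 # d is linearly representable,
    state.xi[d] = {**flat, None: const}      # so it joins S with its expr
```

For example, after assign_linear(state, "d", {"x1": 1.0, "x2": -1.0, None: 0.5}), the state records d = x1 - x2 + 0.5 in ξ together with its interval bounds in the abstract element.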
in summary, the innovation point of deep symbol is that the symbol propagation technology is designed and implemented according to the characteristic of local robustness, and the symbol propagation technology is proved to have a significant effect in the verification of the neural network through experiments. Because a large number of neurons are always activated or not activated in the sense of local robustness, the ReLU function is degraded into a linear function or constant 0, and the accuracy of abstract interpretation can be improved by using a symbol propagation technology, and the general optimization method has the advantage that the interval abstract domain and the polyhedral abstract domain are obviously improved after symbol propagation is added through experiments. When verifying a large-scale convolutional neural network, the use of a complex abstract domain often faces the difficulty that memory occupation is too large and impractical, and experiments prove that better effects can be obtained by using the abstract domain added into a symbol propagation interval. Meanwhile, the abstract interpretation is used for helping the SMT solver to analyze the obtained value range of each neuron, and when the opposite sample property of the Acasxu network is verified, the value constraint significantly accelerates the solving speed, and the acceleration ratio reaches 549.43% (9.16 hours to 1.41 hours).
The foregoing examples represent only several embodiments of the invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the invention, all of which fall within the protection scope of the invention. Accordingly, the protection scope of the invention shall be subject to the appended claims.

Claims (4)

1. A method for verifying local robustness of a neural network, comprising:
inputting the neural network obtained through training, and processing the semantics of the neural network to obtain a neural network model;
based on abstract interpretation, abstracting the semantics of the neural network into operations or transformations on the abstract domain of the mathematical structure;
giving an image input and a small perturbation;
abstracting the image input and the small perturbation into elements of an abstract domain, introducing a symbolic propagation method, and converting the local robustness verification problem of the neural network into abstract domain operations;
outputting an operation result so as to confirm whether the local robustness of the neural network is verified;
the abstract domain operation comprises the following steps:
reading the neural network model, the image input and the perturbation;
establishing and initializing a related data structure of the symbolic representation;
analyzing the values of the neural network over the local region layer by layer and neuron by neuron;
analyzing at the output layer whether an adversarial sample is present;
and outputting an analysis result.
2. The method for verifying local robustness of a neural network according to claim 1, wherein the neural network specifically comprises a neural network with ReLU as the activation function, having a fully connected layer, a convolutional layer and a pooling layer.
3. The method for verifying local robustness of a neural network according to claim 2, characterized in that the method comprises: the behavior of the fully connected, convolutional and pooling layers of the neural network with ReLU as the activation function is handled by an abstract transformer.
4. The method for verifying local robustness of a neural network according to claim 2 or 3, wherein the abstract domains comprise at least an interval abstract domain, a symmetric polyhedron abstract domain, or a convex polyhedron abstract domain.
CN201911011197.8A 2019-10-23 2019-10-23 Local robustness verification method of neural network Active CN110991603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911011197.8A CN110991603B (en) 2019-10-23 2019-10-23 Local robustness verification method of neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911011197.8A CN110991603B (en) 2019-10-23 2019-10-23 Local robustness verification method of neural network

Publications (2)

Publication Number Publication Date
CN110991603A CN110991603A (en) 2020-04-10
CN110991603B (en) 2023-11-28

Family

ID=70082377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911011197.8A Active CN110991603B (en) 2019-10-23 2019-10-23 Local robustness verification method of neural network

Country Status (1)

Country Link
CN (1) CN110991603B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733941B (en) * 2021-01-12 2022-06-21 山东大学 High-robustness user classification method and system based on neural network
CN113378009B (en) * 2021-06-03 2023-12-01 上海科技大学 Binary decision diagram-based binary neural network quantitative analysis method
CN113673680B (en) * 2021-08-20 2023-09-15 上海大学 Model verification method and system for automatically generating verification properties through an adversarial network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506825A (en) * 2017-09-05 2017-12-22 河海大学 A kind of pumping plant fault recognition method
CN109359693A (en) * 2018-10-24 2019-02-19 国网上海市电力公司 A kind of Power Quality Disturbance Classification Method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7398259B2 (en) * 2002-03-12 2008-07-08 Knowmtech, Llc Training of a physical neural network


Also Published As

Publication number Publication date
CN110991603A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110991603B (en) Local robustness verification method of neural network
Thomas et al. A theoretical perspective on hyperdimensional computing
Huang et al. Solution Path for Pin-SVM Classifiers With Positive and Negative τ Values
Salazar On Statistical Pattern Recognition in Independent Component Analysis Mixture Modelling
Law et al. Ultrahyperbolic representation learning
CN112445957A (en) Social network abnormal user detection method, system, medium, equipment and terminal
CN114399025A (en) Graph neural network interpretation method, system, terminal and storage medium
CN114036308A (en) Knowledge graph representation method based on graph attention neural network
Converse et al. Probabilistic symbolic analysis of neural networks
Dong et al. Training robust support vector regression machines for more general noise
CN113673581B (en) Hard tag black box depth model countermeasure sample generation method and storage medium
CN113626685B (en) Rumor detection method and device oriented to propagation uncertainty
Chang Latent variable modeling for generative concept representations and deep generative models
CN112926052B (en) Deep learning model security hole testing and repairing method, device and system based on genetic algorithm
CN114118416A (en) Variational graph automatic encoder method based on multi-task learning
US20230394304A1 (en) Method and Apparatus for Neural Network Based on Energy-Based Latent Variable Models
Bouneffouf Computing the Dirichlet-multinomial log-likelihood function
Zhu et al. Structural Landmarking and Interaction Modelling: A “SLIM” Network for Graph Classification
Wei Network completion via deep metric learning
US20230421542A1 (en) Methods and systems for highly secure data analytic encryption and detection and extraction of truthful content
Ganea Non-Euclidean Neural Representation Learning of Words, Entities and Hierarchies
Lathrop Motion Planning Algorithms for Safety and Quantum Computing Efficiency
Salih et al. An optimized deep learning model for optical character recognition applications
Dong et al. Learning Syllogism with Euler Neural-Networks
US20240020553A1 (en) Interactive electronic device for performing functions of providing responses to questions from users and real-time conversation with the users using models learned by deep learning technique and operating method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant