CN113469339B - Automatic driving neural network robustness verification method and system based on dimension reduction - Google Patents

Automatic driving neural network robustness verification method and system based on dimension reduction

Info

Publication number: CN113469339B (granted publication of CN113469339A)
Application number: CN202110741891.6A
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 郭山清, 唐朋, 张云若
Assignee (current and original): Shandong University
Other languages: Chinese (zh)
Application filed by Shandong University
Classifications

    • G06N3/045 — Combinations of networks (computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology)
    • G06F11/3604 — Software analysis for verifying properties of programs (error detection; preventing errors by testing or debugging software)
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks
Abstract

The application discloses a dimension-reduction-based method and system for verifying the robustness of an automatic driving neural network, comprising the following steps: generating a hyper-rectangular input set based on the input image data; dividing the image set of the input set under the affine transformation of the first layer of the neural network according to a given width constraint δ, and searching it for a subset that does not meet the robustness requirement; if no such subset exists, the autopilot neural network is considered safe; otherwise, the autopilot neural network is considered unsafe. The method can effectively reduce the time complexity of neural network robustness verification and improve running speed and efficiency.

Description

Automatic driving neural network robustness verification method and system based on dimension reduction
Technical Field
The application relates to the technical field of trusted artificial intelligence, in particular to an automatic driving neural network robustness verification method and system based on dimension reduction.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
At present, neural networks are widely applied in fields such as natural language processing, speech recognition, image recognition, automatic driving, malware detection, and medicine. Automatic driving in particular has extremely high safety requirements. Investigations show that autopilot has caused multiple traffic accidents around the world, resulting in injuries and even deaths. This safety deficiency has attracted considerable attention and debate about whether autopilot technology should continue to be developed, which severely hampers its further adoption. Studies have shown that neural networks are very sensitive to perturbations, which is the main cause of frequent autopilot accidents. When normal data is slightly perturbed, the neural network may misclassify it, even though such a perturbation would not affect a human driver's judgment.
Because neural networks have potential safety hazards in actual use, their robustness should be verified before practical deployment in order to improve their reliability. Robustness guarantees that all inputs in a certain neighborhood of a normal input receive the same classification, so a robust neural network can effectively resist misclassification caused by perturbations. However, among existing verification methods, when verifying a neural network in which the number of nodes of each layer is not larger than that of the previous layer, the algorithms perform redundant computation, making verification inefficient.
Disclosure of Invention
The application aims to provide a dimension-reduction-based autopilot neural network robustness verification method and system for neural networks in which the number of nodes of the input layer is larger than that of the first layer, so that time complexity can be effectively reduced and the verification speed of the neural network improved.
In some embodiments, the following technical scheme is adopted:
a dimension reduction-based automatic driving neural network robustness verification method comprises the following steps:
generating a hyper-rectangular input set based on the input image data;
dividing an image set of the input set under affine transformation of the first layer of the neural network according to a given width constraint delta, and searching for whether a subset which does not meet the robustness requirement exists in the image set;
if not, the autopilot neural network is considered safe; otherwise, the autopilot neural network is considered unsafe.
Further, generating a hyper-rectangular input set based on the input image data specifically includes:
reorganizing the input image data into a k_0-dimensional vector and normalizing it;
determining an allowable error, and generating the input set from the normalized vector c and the allowable error r; the input set is the hyper-rectangle determined by |x − c| ≤ r.
Further, according to a given width constraint δ, dividing the image set of the input set under affine transformation of the first layer of the neural network, and searching therein whether there is a subset that does not meet the robustness requirement, specifically includes:
the image set is partitioned into subsets and it is ensured that the maximum width of each subset is smaller than a given width limit delta, and then the subsets are searched for which the output reachable set estimate does not meet the output limit.
Further, the specific process comprises the following steps:
calculating the minimum hyper-rectangle containing the image set of the input set in the first layer according to the input set and affine transformation of the first layer of the neural network;
dividing the minimum hyper-rectangle calculated in the previous step into several mutually disjoint small hyper-rectangles, called blocks, according to a given width limit δ > 0, such that the width of each block is not greater than δ;
for each block, judging whether it is redundant;
for non-redundant partitions, it is determined whether each partition is robust.
Further, for each block, determining whether it is redundant specifically includes:
for the kth block Q_k, the intersection of its preimage set P_k at the input layer with the input set [l_0, u_0] satisfies a set of linear inequalities:
judging with a linear programming solver whether this intersection is empty; if it is empty, Q_k is redundant and is eliminated; otherwise, Q_k is not redundant.
Further, for non-redundant blocks, determining whether each block is robust specifically includes:
solving the output reachable-set estimate corresponding to each non-redundant block at the first layer in a layer-by-layer propagation manner;
checking in turn whether the output reachable-set estimate corresponding to each block meets the output limit; if so, the block is robust; otherwise, the block is not robust;
if all non-redundant blocks are robust, the neural network is considered safe; otherwise, the neural network is considered unsafe.
Further, according to a given width constraint δ, dividing the image set of the input set under affine transformation of the first layer of the neural network, and searching therein whether there is a subset that does not meet the robustness requirement, specifically includes:
step 101: an empty stack is initialized, denoted S.
Step 102: calculating a minimum hyper-rectangle containing the image set of the input set in the first layer, and pressing the hyper-rectangle into the stack top;
step 103: judging whether the stack S is currently empty or not; if the stack S is not empty, execute step 1031; otherwise, the neural network is considered to be safe;
step 1031: ejecting a stack top element Q, and judging whether the Q is redundant; if Q is redundant, returning to step 103; otherwise, go to step 1032;
step 1032: judging whether Q is robust or not, if so, returning to the step 103; otherwise, go to step 1033;
step 1033: judging whether the maximum width of Q is smaller than a given width limit delta; if so, the neural network is considered unsafe; otherwise, halving Q and pressing both small hyper-rectangles into the top of the stack, returning to step 103.
In other embodiments, the following technical solutions are adopted:
an automatic driving neural network robustness verification system based on dimension reduction, comprising:
means for generating a hyper-rectangular input set based on the input image data;
means for segmenting the image set of the input set under affine transformation of the first layer of the neural network according to a given width constraint δ and searching therein whether there is a subset that does not meet the robustness requirement; if not, the autopilot neural network is considered safe; otherwise, the autopilot neural network is considered unsafe.
In other embodiments, the following technical solutions are adopted:
a terminal device comprising a processor and a memory, the processor being configured to implement instructions; the memory is for storing a plurality of instructions adapted to be loaded by the processor and to perform the above-described dimension-reduction-based autopilot neural network robustness verification method.
In other embodiments, the following technical solutions are adopted:
a computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the above-described dimension-reduction-based method of robustness verification of an autopilot neural network.
Compared with the prior art, the application has the beneficial effects that:
when verifying the neural network meeting the conditions, the application divides the image set of the input set at the first layer instead of the input set per se; since the dimension of the image set of the first layer is lower than the dimension of the input set, the number of subsets split is smaller, and thus verification can be completed in a shorter time.
The method can effectively reduce the time complexity of the robustness verification of the neural network and improve the running speed and efficiency.
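The effect of dimension on the number of blocks can be made concrete with a small count: covering a hyper-rectangle with blocks of width δ requires a number of blocks that grows exponentially with the number of dimensions, which is why dividing the lower-dimensional first-layer image set is cheaper than dividing the input set itself. A minimal sketch (the function name and the example dimensions are illustrative assumptions, not taken from the patent):

```python
from math import ceil, prod

def block_count(widths, delta):
    # number of axis-aligned blocks of width <= delta needed to cover
    # a hyper-rectangle with the given side widths
    return prod(ceil(w / delta) for w in widths)

# a 10-dimensional input set vs a 5-dimensional first-layer image set,
# both with unit side widths and delta = 0.5
n_input = block_count([1.0] * 10, 0.5)   # 2**10 = 1024 blocks
n_image = block_count([1.0] * 5, 0.5)    # 2**5  = 32 blocks
```

The same count reproduces the 128 blocks of the worked example later in the description (a rectangle of widths 7.8 × 15.7 with δ = 1 yields 8 × 16 blocks).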
Additional features and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
FIG. 1 is a block diagram of a feed-forward network;
FIG. 2 is a flowchart of a method of searching for mode A in an embodiment of the present application;
FIG. 3 is a flow chart of a method of searching mode B (binary tree search) in an embodiment of the application;
FIG. 4 is an input image of a neural network in an embodiment of the present application;
FIG. 5 is an example of a first hidden layer (2-dimensional space) in an embodiment of the application;
fig. 6 (a) - (b) are respectively the results obtained by halving the hyper-rectangle in fig. 5 and the results of the image set after segmentation.
FIG. 7 is an exemplary illustration of search pattern B in an embodiment of the present application;
FIGS. 8 and 9 are respectively exemplary illustrations of search pattern A in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a neural network in an embodiment of the present application.
Detailed Description
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Example 1
The deep neural network considered here is a feed-forward neural network, as shown in Fig. 1. An n-layer feed-forward neural network consists of an input layer, with corresponding variable z_0, and n hidden layers, with corresponding variables z_1, …, z_n; the last hidden layer serves as the output layer, with corresponding variable z_n. The i-th layer consists of k_i (i = 1, …, n) nodes. Each node represents a real variable, so the variable z_i formed by all nodes of layer i can be regarded as a vector in k_i-dimensional Euclidean space.
For any hidden or output layer variable z_i, the values of all nodes in z_i are computed as follows. To compute the j-th node z_{i,j} of layer i, all nodes z_{i-1,1}, …, z_{i-1,k_{i-1}} of the previous layer are multiplied by the weights w_{i,j,1}, …, w_{i,j,k_{i-1}} respectively and summed, plus an offset b_{i,j}; the resulting value is denoted ẑ_{i,j}. Applying the activation function σ_{i,j} to ẑ_{i,j} then gives z_{i,j}. Written in vector form:

ẑ_{i,j} = w_{i,j}^T · z_{i-1} + b_{i,j},  z_{i,j} = σ_{i,j}(ẑ_{i,j})

where w_{i,j} is the column vector formed by the weights w_{i,j,1}, …, w_{i,j,k_{i-1}}, and T denotes the transpose.
The i-1 layer to i layer calculations can then be written as a function f i (x):
z_i = f_i(z_{i-1}) = σ_i(W_i z_{i-1} + b_i)
where the weight matrix W_i ∈ R^{k_i × k_{i-1}} is formed from all the w_{i,j}, and the offset vector b_i ∈ R^{k_i} is formed from all the b_{i,j}. Each activation function σ_{i,j}: R → R must be monotonically non-decreasing, and σ_i: R^{k_i} → R^{k_i} applies the σ_{i,j} componentwise, so the activation functions of different nodes do not affect each other.
The entire network can thus be written as a function f(x):

f(x) = f_n ∘ f_{n-1} ∘ … ∘ f_1(x)

where ∘ denotes function composition, f: R^{k_0} → R^{k_n}, and f_i: R^{k_{i-1}} → R^{k_i}.
The verification problem for a deep neural network is to verify whether an input–output relationship of the network holds. Given a feed-forward neural network f(x), an input constraint (hereinafter also called the input set) X ⊆ R^{k_0} and a corresponding output limit Y ⊆ R^{k_n}, the verification problem is to prove that the following relationship holds:

x ∈ X ⟹ f(x) ∈ Y

In practice, the output limit Y represents a safety limit, i.e. any point in the complement of Y represents an unsafe parameter value. The relationship holds if the set R reached after each point of X passes through the neural network, R = {f(x) : x ∈ X}, is contained in Y, and does not hold otherwise. However, computing R exactly takes super-polynomial time, so instead one computes an over-approximation R̃ of R (hereinafter the reachable-set estimate) satisfying R̃ ⊇ R, and then checks whether R̃ ⊆ Y to decide whether R ⊆ Y. If R̃ ⊆ Y, the relationship is considered to hold; otherwise, it is considered not to hold. Clearly, if R̃ ⊆ Y then necessarily R ⊆ Y, and the conclusion is correct. But if R̃ ⊄ Y while R ⊆ Y, this approach reaches an incorrect conclusion. Thus, such algorithms are reliable, but not complete.
A method of verifying the safety of a deep neural network is reliable if every network it judges to be safe is in fact safe (it may misjudge an actually safe network as unsafe). A method is complete if every actually safe network is judged safe by it (it may misjudge an actually unsafe network as safe).
Based on this, according to an embodiment of the present application, there is disclosed an automatic driving neural network robustness verification method based on dimension reduction, including:
generating a hyper-rectangular input set based on the input image data;
dividing an image set of the input set under affine transformation of the first layer of the neural network according to a given width constraint delta, and searching for whether a subset which does not meet the robustness requirement exists in the image set;
if not, the autopilot neural network is considered safe; otherwise, the autopilot neural network is considered unsafe.
Specifically, the dimension-reduction-based robustness verification method for autopilot neural networks comprises the following steps:
step one: generating a hyper-rectangular input set by using normal input data;
the method specifically comprises the following substeps:
step 1.1: selecting a normal input data and normalizing;
wherein the input data is assumed to be normal and contains a total k such as height, weight, blood pressure, etc 0 Data, consider these data as a k 0 The dimension vector, taken as the center of the neighborhood, is denoted as c * The j-th component of it is noted asLet->The possible range of values of (2) is +.>Then the normalized value is +.>From all c j (j=1,2,…, 0 ) The vector c is c * Normalized results.
Step 1.2: the allowable error is selected.
Assume the error we allow has size e; then for data item c_j (j = 1, 2, …, k_0), any value between l_j = c_j − e and u_j = c_j + e is allowed. The k_0-dimensional vector formed from k_0 copies of e is denoted r.
Step 1.3: an input set is generated from the normal data and the allowed error.
The input set is the hyper-rectangle determined by |x − c| ≤ r, denoted X. This hyper-rectangle can be written as [l, u], where l and u are the vectors formed by the l_j and u_j (j = 1, 2, …, k_0) respectively, i.e. the lowest and highest vertices of X. The hyper-rectangle is the generalization of the rectangle to high-dimensional space, and its inequality form and interval form correspond one-to-one.
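The three sub-steps of step one can be sketched numerically; a minimal example assuming min–max normalization (the function name and arguments are illustrative, not the patent's notation):

```python
import numpy as np

def make_input_set(c_star, lo, hi, e):
    """Build the hyper-rectangular input set X = [l, u] = [c - r, c + r].

    c_star : raw normal input (k0 items); lo, hi : assumed per-item value
    ranges used for normalization; e : allowed error (in normalized units).
    """
    c_star, lo, hi = (np.asarray(a, float) for a in (c_star, lo, hi))
    c = (c_star - lo) / (hi - lo)     # step 1.1: normalized center c
    r = np.full(c.shape, e)           # step 1.2: error vector r = (e, ..., e)
    return c - r, c + r               # step 1.3: lowest and highest vertices

l, u = make_input_set([5.0, 10.0], [0.0, 0.0], [10.0, 20.0], 0.1)
# l = [0.4, 0.4], u = [0.6, 0.6]
```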
Step two: the image set of the input set under affine transformation of the first layer of the neural network is segmented according to a given width constraint δ and searched for the presence of a subset that does not meet the robustness requirement. According to existing work, there are two search modes:
search mode a: the image set is divided directly into subsets and it is ensured that the maximum width of each subset is smaller than a given width limit δ, and then subsets among these subsets are searched for that are not robust (i.e. whose output reachable set estimate does not meet the output limit).
Search mode B: the image set (and its subsets) is iteratively halved in a manner similar to a dichotomy search until all subsets are robust or the maximum width of the subsets is less than a given width limit δ.
The two search modes will be described below, respectively.
Referring to fig. 2, the search method a specifically includes the following sub-steps:
step 2a.1: based on the input set and the affine transformation of the first layer of the neural network, a minimum hyper-rectangle (i.e., a range of values for each node of the first layer) containing the image set of the input set at the first layer is calculated. Assume that the input set is a hyper-rectangle [ l ] 0 ,u 0 ]The value range of each node of the first layer can be solved one by one through the following formula:
l 1,j =[w 1,j ] + ·l 0 +[w 1,j ] - ·u 0 +b 1,j
u 1,j =[w 1,j ] + ·u 0 +[w 1,j ] - ·l 0 +b 1,j .
where j=1, 2, …, k 1 ,k 1 Is the number of nodes of the first layer. The minimum hyper-rectangle calculated is [ l ] 1 ,u 1 ]Vector l 1 And u 1 The j-th component of (2) is l 1,j And u 1,j 。[w 1,j ] + Representation pair w 1,j The non-negative value is taken, namely the non-negative element is unchanged, and the negative element is taken as zero. Similarly, [ w ] 1,j ] - Representation pair w 1,j Take a non-positive value.
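The two formulas above amount to splitting the weight matrix into its non-negative and non-positive parts; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def first_layer_box(W1, b1, l0, u0):
    """Minimum hyper-rectangle [l1, u1] containing {W1 x + b1 : x in [l0, u0]}."""
    Wp = np.maximum(W1, 0)   # [w_1,j]^+ : negative entries zeroed
    Wm = np.minimum(W1, 0)   # [w_1,j]^- : positive entries zeroed
    l1 = Wp @ l0 + Wm @ u0 + b1
    u1 = Wp @ u0 + Wm @ l0 + b1
    return l1, u1
```

Each component of W1 x + b1 is maximized over x ∈ [l0, u0] by pairing positive weights with u0 and negative weights with l0 (and vice versa for the minimum), which is exactly what the split achieves.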
Step 2A.2: divide the minimum hyper-rectangle containing the first-layer image set computed in the previous step (i.e. the light-colored large rectangle in Fig. 8) into several mutually disjoint small hyper-rectangles (hereinafter also called blocks) according to the given width limit δ > 0, with the width of each block not greater than δ.
Step 2a.3: for each block, it is determined whether it is redundant.
Since the hyper-rectangle split in the previous step usually contains many points that do not belong to the image set, i.e. redundant points, we need to find and exclude the blocks that contain only redundant points, i.e. redundant blocks, in order to improve accuracy.
Without loss of generality, for the kth block Q_k = [l^k, u^k] (k = 1, 2, …, n_all), the condition z ∈ [l^k, u^k] is equivalent to the following set of inequalities:

I_{k_1} z ≤ u^k,  −I_{k_1} z ≤ −l^k

where I_{k_1} denotes the k_1 × k_1 identity matrix.

Then the preimage set P_k of Q_k at the input layer satisfies the following set of inequalities:

W_1 x ≤ u^k − b_1,  −W_1 x ≤ −(l^k − b_1)

Then the intersection of P_k with the input set [l_0, u_0] satisfies the following set of inequalities:

W_1 x ≤ u^k − b_1,  −W_1 x ≤ −(l^k − b_1),  x ≤ u_0,  −x ≤ −l_0
Then, a linear programming solver is used to determine whether this intersection is empty. If it is empty, Q_k is redundant and is eliminated; otherwise, Q_k is not redundant.
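The emptiness test can be sketched with an off-the-shelf LP solver; a minimal version using scipy (assumed available; the function name is illustrative). The block Q_k is redundant exactly when the system {l^k ≤ W_1 x + b_1 ≤ u^k, l_0 ≤ x ≤ u_0} is infeasible:

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(W1, b1, lk, uk, l0, u0):
    """True iff block [lk, uk] has an empty preimage inside the input set."""
    A_ub = np.vstack([W1, -W1])                # W1 x <= uk - b1
    b_ub = np.concatenate([uk - b1, b1 - lk])  # -W1 x <= -(lk - b1)
    res = linprog(c=np.zeros(W1.shape[1]),     # any objective: only feasibility matters
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=list(zip(l0, u0)), method="highs")
    return res.status == 2                     # status 2 = problem infeasible
```

With zero bias and the unit-cube input set, this reproduces the redundant and non-redundant decisions of the worked example later in the description.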
Step 2A.4: for the remaining (i.e. non-redundant) blocks, determine whether each block is robust; this comprises the following two sub-steps:
Step 2A.4.1: solve the output reachable-set estimate corresponding to each non-redundant block at the first layer, as follows:
in the last step, we need to determine a certain partition Q in the first layer k Whether or not it is robust. This first requires solving the output reachable set estimate R for this block correspondence k . The main method adopted in this step is interval arithmetic. Because the value of the node on any hidden layer of the neural network only depends on the value of the node on the upper layer, the node can be calculated in a layer-by-layer propagation mode. From Q k =[l k ,u k ]We can obtain Q k The range of values of the variables at any node of the first layer (after activation), i.e.(j=1,2,…,k 0 ). Therefore, the following method can be used to find the value ranges of the nodes of the second layer, and then the value ranges of the nodes of the third layer up to the output layer can be found. The hyper-rectangle formed by the value ranges of all nodes of the output layer is R k . Without loss of generality, we assume that we have found the value range +.>Then the value range of any node j on the i-th layer is +.>This can be found by the following formula:
wherein [ w ] i,j ] + Representation pair w i,j The non-negative value is taken, namely the non-negative element is unchanged, and the negative element is taken as zero. Similarly, [ w ] i,j ] - Representation pair w i,j Take a non-positive value. Thus we canLayer by layer calculation to obtain Q k Corresponding output reachable set estimate R k
Step 2a.4.2: checking whether each output set estimate meets the output limit, specifically as follows:
we have found the partition Q in the last step k Corresponding output reachable set estimate R k Next we will check R in turn k Whether the output limit is met, i.e. is a subset of the given output limit Y. Where Y is a given set of output layers (hyper-rectangular or convex polyhedron) where all element classifications are identical. If there isThen Q k Is robust; otherwise, Q k Is not robust.
The neural network is considered safe if all non-redundant blocks are robust. Otherwise, the neural network is considered unsafe.
Referring to fig. 3, the search method B specifically includes the following sub-steps:
step 2b.1: an empty stack is initialized, denoted S.
Step 2b.2: the smallest hyper-rectangle containing the image set of the input set at the first layer is calculated (see step 2a.1 for details) and this hyper-rectangle is pressed into the top of the stack.
Step 2b.3: it is determined whether the stack S is currently empty. If the stack S is not empty, step 2B.3.1 is performed. Otherwise the neural network is considered secure.
Step 2B.3.1: pop the top element Q (a hyper-rectangle) off the stack and determine whether Q is redundant (see step 2A.3 for details). If Q is redundant, return to step 2B.3. Otherwise, perform step 2B.3.1.1.
Step 2b.3.1.1: it is determined whether Q is robust (see step 2a.4 for details). If Q is robust, return to step 2B.3. Otherwise, step 2b.3.1.1.1 is performed.
Step 2b.3.1.1.1: it is determined whether the maximum width of Q is less than a given width limit δ. If so, the neural network is considered unsafe. Otherwise, halving Q, pressing two small hyper rectangles into the stack top, and returning to the step 2B.3.
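Steps 2B.1–2B.3.1.1.1 can be sketched as a simple stack loop; the redundancy and robustness checks are passed in as callbacks, and all names are illustrative assumptions:

```python
def halve(box):
    """Split a box (list of (lo, hi) intervals) along its widest dimension."""
    j = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, hi = box[j]
    mid = (lo + hi) / 2.0
    return [box[:j] + [(lo, mid)] + box[j + 1:],
            box[:j] + [(mid, hi)] + box[j + 1:]]

def verify_mode_b(box, delta, is_redundant, is_robust):
    """Return True iff the network is judged safe by the mode-B search."""
    stack = [box]                      # steps 2B.1-2B.2
    while stack:                       # step 2B.3
        q = stack.pop()
        if is_redundant(q):            # step 2B.3.1
            continue
        if is_robust(q):               # step 2B.3.1.1
            continue
        if max(hi - lo for lo, hi in q) < delta:
            return False               # step 2B.3.1.1.1: non-robust region found
        stack.extend(halve(q))         # halve Q and push both halves
    return True
```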
Taking a neural network for an image recognition task as an example (autopilot needs to recognize signal lights in a captured image, or pedestrians and other vehicles on the road), we select a 1024 × 1023 color picture of a pet cat as a normal input, as shown in Fig. 4.
First, the resolution is changed to 224 × 224 to fit the input of the neural network; the resized picture is a 224 × 224 × 3 three-dimensional tensor.
We can reshape the tensor into a vector in a fixed order (e.g. splitting the three-dimensional tensor into three 224 × 224 matrices, splitting each matrix into row vectors, and connecting all the vectors end to end), denoted x_0.
We verified that when a small perturbation is added to this sample, the predicted outcome (classification) of the neural network does not change, i.e. is still a "cat".
Step one: according to the data above, take the neighborhood center c* = x_0. For ease of calculation, take the error magnitude e = 2 (non-normalized) pixel values, from which the input set X, i.e. the hyper-rectangle [c − r, c + r] (after normalization), is generated.
Step two: for ease of illustration and without loss of generality, a network with three nodes in the input layer and two nodes in the first layer is used as an example of how to search for non-robust subsets at the first layer. The parameters of the affine transformation of the first layer of the neural network are as follows:

W_1 = [[1.8, −4, −2], [1.7, 6, −8]],  b_1 = [0, 0]

The input set is the unit cube, i.e. [0_3, 1_3], where 0_3 = [0, 0, 0] and 1_3 = [1, 1, 1]. Its image set at the first layer is shown as the dark hexagon in Fig. 5, and the light rectangle is the minimum (hyper-)rectangle containing the image set. The minimum hyper-rectangle containing the image set is calculated as follows.
[w_{1,1}]^+ = [1.8, 0, 0], [w_{1,1}]^- = [0, −4, −2], [w_{1,2}]^+ = [1.7, 6, 0], [w_{1,2}]^- = [0, 0, −8]
l_{1,1} = [w_{1,1}]^+ · 0_3 + [w_{1,1}]^- · 1_3 = −6
u_{1,1} = [w_{1,1}]^+ · 1_3 + [w_{1,1}]^- · 0_3 = 1.8
l_{1,2} = [w_{1,2}]^+ · 0_3 + [w_{1,2}]^- · 1_3 = −8
u_{1,2} = [w_{1,2}]^+ · 1_3 + [w_{1,2}]^- · 0_3 = 7.7
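The arithmetic above can be checked numerically; a minimal sketch with W_1 taken from the positive/negative parts listed above and zero bias (the zero bias is an assumption consistent with the computed bounds):

```python
import numpy as np

W1 = np.array([[1.8, -4.0, -2.0],    # row 1 -> node z_{1,1}
               [1.7,  6.0, -8.0]])   # row 2 -> node z_{1,2}
l0, u0 = np.zeros(3), np.ones(3)     # unit cube [0_3, 1_3]

Wp, Wm = np.maximum(W1, 0), np.minimum(W1, 0)
l1 = Wp @ l0 + Wm @ u0               # [-6.0, -8.0]
u1 = Wp @ u0 + Wm @ l0               # [ 1.8,  7.7]
```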
Search mode A: we choose δ = 1 to divide the rectangle in Fig. 5 into 128 rectangular blocks (squares, except those along the top and right edges). In Fig. 5, the dark hexagon is the image set and the light rectangle is the minimum hyper-rectangle containing the image set; this light rectangle is what gets divided. Fig. 6(a) shows the result of halving the hyper-rectangle of Fig. 5; Fig. 6(b) shows the image set after division.
Search mode B: the initialization stack S is empty and the large rectangle in the upper diagram is pushed onto the top of the stack.
At this point the stack is not empty, so the top element (i.e. the large rectangle of Fig. 5) is popped. This element is not redundant; assume it is not robust and its maximum width is not less than the width limit, so we halve it to get two small rectangles, which are pressed onto the top of the stack in turn.
At this point the stack is not empty, so the top element (i.e. the rectangle in Fig. 6(b)) is popped. Assume this rectangle is not redundant, it is not robust, and its maximum width is greater than the width limit; we then halve it to get the two small rectangles in Fig. 7, which are pressed onto the top of the stack in turn.
The subsequent operations are not repeated.
Step 2A.3: the following examples illustrate how to determine whether a block is a redundant block (applicable to both search modes).
1. Redundant block instance
The dark squares in Figs. 8 and 9 represent different blocks carved out of the large rectangle while search mode A runs. The block in Fig. 8 has no intersection with the image set (the hexagon) and is therefore ignored. The block in Fig. 9 has an intersection with the image set, so the subsequent steps (2A.4.1) and (2A.4.2) will be performed on it.
As shown in Fig. 8, the gray square (hereinafter Q_1) is a redundant block (intuitively, it does not intersect the hexagon, the image set of the input set, in the figure).
According to the method of the application, to determine whether Q_1 is a redundant block, its preimage set P_1 at the input layer needs to be computed. Clearly the lower and upper bounds l^1 and u^1 of Q_1 are [0, 5] and [1, 6] respectively; according to the formulas in step 2A.3, P_1 satisfies the corresponding set of inequalities.
Then P 1 The intersection with the input set satisfies the following set of inequalities.
Solving for any one of the linear constraints (e.g., x) in the intersection using an existing linear programming solver 1 +x 2 +x 3 Wherein x= [ x ] 1 ,x 2 ,x 3 ]) Will not solve (because the intersection (i.e., the field) is empty). Thus, Q can be determined 1 Is redundant blocking.
2. Example of a non-redundant block
As shown in fig. 9, the gray square (hereinafter Q_2) is a non-redundant block (intuitively, it intersects the hexagon, i.e. the image set of the input set, in the figure).
According to the method of the application, to determine whether Q_2 is a redundant block, the preimage set P_2 of the input layer must be computed. Clearly the lower and upper bounds l_2 and u_2 of Q_2 are [0,0] and [1,1] respectively, and by the formula in step 2.4, P_2 satisfies the following set of inequalities.
Then the intersection of P_2 with the input set satisfies the following set of inequalities.
Optimizing any linear objective (e.g., x_1 + x_2 + x_3, where x = [x_1, x_2, x_3]) over this intersection with an existing linear programming solver finds a corresponding solution, because the intersection (i.e., the feasible region) is not empty. Thus Q_2 can be determined to be a non-redundant block.
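Both examples reduce to the same linear-programming feasibility test. The sketch below uses `scipy.optimize.linprog`; the constraint matrix is illustrative only, since the patent's actual inequality sets are given in the figures and not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def intersection_is_empty(A, b, l0, u0):
    """True if {x | A x <= b} ∩ [l0, u0] is empty: optimizing any linear
    objective (the text uses x_1 + x_2 + x_3) over the intersection fails
    exactly when the feasible region is empty."""
    c = np.ones(len(l0))                      # objective x_1 + ... + x_n
    res = linprog(c, A_ub=A, b_ub=b, bounds=list(zip(l0, u0)), method="highs")
    return res.status == 2                    # status 2: problem is infeasible

# Illustrative 2-D data: a preimage constraint x_1 + x_2 >= 10 that cannot
# be met inside the input box [0, 1]^2 but can be met inside [0, 6]^2.
A = np.array([[-1.0, -1.0]])
b = np.array([-10.0])
print(intersection_is_empty(A, b, [0.0, 0.0], [1.0, 1.0]))  # True: redundant
print(intersection_is_empty(A, b, [0.0, 0.0], [6.0, 6.0]))  # False: kept
```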
Step 2a.4.1: the following illustrates how to solve the output reachable-set estimate of a given block.
Without loss of generality, we illustrate the layer-by-layer propagation of a given hyper-rectangle between two adjacent layers (layer i-1 and layer i, i ≥ 2), each layer having two nodes:
Fig. 10 shows the structure of the neural network; its parameters are as follows:
the activation function is set to ReLU.
Assume that the range of x_1 is [0,1] and the range of x_2 is [-2,-1], i.e.
The corresponding i-th layer hyper-rectangle calculation procedure is as follows:
[w_{i,1}]^+ = [1,2], [w_{i,1}]^- = [0,0], [w_{i,2}]^+ = [0,0], [w_{i,2}]^- = [-1,0]
l_{i,1} = σ_{i,1}([w_{i,1}]^+ · l_{i-1} + [w_{i,1}]^- · u_{i-1} + b_{i,1}) = -3
u_{i,1} = σ_{i,1}([w_{i,1}]^+ · u_{i-1} + [w_{i,1}]^- · l_{i-1} + b_{i,1}) = 0
l_{i,2} = σ_{i,2}([w_{i,2}]^+ · l_{i-1} + [w_{i,2}]^- · u_{i-1} + b_{i,2}) = -2
u_{i,2} = σ_{i,2}([w_{i,2}]^+ · u_{i-1} + [w_{i,2}]^- · l_{i-1} + b_{i,2}) = -1
That is, the layer-i hyper-rectangle is the rectangle R = {(y_1, y_2) | -3 ≤ y_1 ≤ 0, -2 ≤ y_2 ≤ -1}.
The computation for a network with more nodes is similar.
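The layer step above can be reproduced with interval arithmetic in NumPy. This is a sketch under stated assumptions: the bias b = [1, -1] is an assumed value (the parameters appear only in fig. 10) chosen so the resulting bounds match the worked values, which are pre-activation bounds.

```python
import numpy as np

def interval_step(W, b, l_prev, u_prev):
    """One layer of interval propagation (pre-activation bounds):
    l_i = [W]^+ . l_{i-1} + [W]^- . u_{i-1} + b
    u_i = [W]^+ . u_{i-1} + [W]^- . l_{i-1} + b"""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ l_prev + Wn @ u_prev + b, Wp @ u_prev + Wn @ l_prev + b

# Rows taken from the worked example: w_{i,1} = [1, 2], w_{i,2} = [-1, 0];
# the bias b = [1, -1] is an assumed value that reproduces the text's bounds.
W = np.array([[1.0, 2.0], [-1.0, 0.0]])
b = np.array([1.0, -1.0])
l, u = interval_step(W, b, np.array([0.0, -2.0]), np.array([1.0, -1.0]))
print(l, u)  # l = [-3, -2], u = [0, -1]: the rectangle R of the text
# A ReLU applied on top would clip these bounds at zero:
# np.maximum(l, 0), np.maximum(u, 0).
```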
Step 2a.4.2: the following illustrates how to determine whether the output reachable-set estimate of a block is robust.
Continuing the example of step 2a.4.1, assume here that layer i is the output layer and that the output limit Y is the rectangle {(y_1, y_2) | -5 ≤ y_1 ≤ 5, -4 ≤ y_2 ≤ 4}. Then clearly R ⊆ Y, and this neural network is therefore safe.
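The robustness test of this step is a plain box-containment check; a minimal sketch:

```python
import numpy as np

def is_robust(l_out, u_out, l_y, u_y):
    """A block is robust when its output reachable-set estimate
    [l_out, u_out] lies inside the output limit Y = [l_y, u_y]."""
    return bool(np.all(l_out >= l_y) and np.all(u_out <= u_y))

# R: -3 <= y_1 <= 0, -2 <= y_2 <= -1 against Y: |y_1| <= 5, |y_2| <= 4.
print(is_robust(np.array([-3.0, -2.0]), np.array([0.0, -1.0]),
                np.array([-5.0, -4.0]), np.array([5.0, 4.0])))  # True
```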
This method effectively reduces the time complexity and increases the running speed.
Embodiment 2
According to an embodiment of the application, an automatic driving neural network robustness verification system based on dimension reduction is disclosed, comprising:
means for generating a hyper-rectangular input set based on the input image data;
means for segmenting the image set of the input set under affine transformation of the first layer of the neural network according to a given width constraint δ and searching therein whether there is a subset that does not meet the robustness requirement; if not, the autopilot neural network is considered safe; otherwise, the autopilot neural network is considered unsafe.
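The first means above constructs the hyper-rectangular input set. A minimal sketch follows, in which the flattening order and the normalization constant 255 are assumptions of this illustration rather than details given here:

```python
import numpy as np

def make_input_set(image, r):
    """Flatten an image to a k0-dimensional vector, normalize it to [0, 1],
    and return the hyper-rectangle ||x - c||_inf <= r as bounds (l0, u0)."""
    c = image.reshape(-1).astype(np.float64) / 255.0  # normalization assumed
    return c - r, c + r

img = np.array([[0, 128], [255, 64]], dtype=np.uint8)  # toy 2x2 "image"
l0, u0 = make_input_set(img, r=0.01)
print(l0.shape)  # (4,): a k0 = 4 dimensional hyper-rectangle
```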
It should be noted that, the specific implementation manner of the above module has been described in the first embodiment, and will not be described again.
Embodiment 3
In one or more embodiments, a terminal device is disclosed, which includes a server comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the program, the processor implements the dimension-reduction-based autopilot neural network robustness verification method of embodiment one. For brevity, the description is not repeated here.
It should be understood that in this embodiment, the processor may be a central processing unit CPU, or another general-purpose processor, a digital signal processor DSP, an application-specific integrated circuit ASIC, a field-programmable gate array FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may include read only memory and random access memory and provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software.
The dimension-reduction-based autopilot neural network robustness verification method of embodiment one may be performed directly by a hardware processor, or by a combination of hardware and software modules within the processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided here.
Embodiment 4
In one or more embodiments, a computer-readable storage medium is disclosed, in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor of a terminal device and to perform the dimension-reduction-based autopilot neural network robustness verification method described in embodiment one.
While the foregoing description of the embodiments of the present application has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the application, but rather, it is intended to cover all modifications or variations within the scope of the application as defined by the claims of the present application.

Claims (7)

1. The automatic driving neural network robustness verification method based on dimension reduction is characterized by comprising the following steps of:
generating a hyper-rectangular input set based on the input image data;
dividing the image set of the input set under affine transformation of the first layer of the neural network according to a given width constraint delta and searching for the presence or absence of a subset that does not meet the robustness requirement therein, comprising in particular:
dividing the image set into a plurality of subsets, ensuring that the maximum width of each subset is smaller than a given width limit δ, and then searching among the subsets for a non-robust subset, namely one whose output reachable-set estimate does not meet the output limit;
if not, the autopilot neural network is considered safe; otherwise, the automatic driving neural network is considered unsafe;
calculating the minimum hyper-rectangle containing the image set of the input set in the first layer according to the input set and affine transformation of the first layer of the neural network;
dividing the minimum hyper-rectangle calculated in the previous step into a plurality of mutually disjoint small hyper-rectangles, namely blocks, according to a given width limit δ > 0, each block having a width no greater than δ;
for each block, judging whether it is redundant;
judging whether each block is robust for non-redundant blocks;
for each block, judging whether it is redundant specifically comprises:
for the k-th block Q_k, the intersection of the preimage set P_k of the input layer and the input set [l_0, u_0] satisfies the following set of inequalities:
judging whether the intersection is empty by using a linear programming solver; if it is empty, Q_k is redundant and is discarded; otherwise, Q_k is not redundant.
2. The dimension-reduction-based automatic driving neural network robustness verification method according to claim 1, wherein the generating of the super-rectangular input set based on the input image data specifically comprises:
reorganizing the input image data into a k_0-dimensional vector and normalizing it;
determining an allowable error r, and generating the input set from the normalized vector c and the allowable error r; the input set is the hyper-rectangle determined by ||x - c||_∞ ≤ r.
3. The dimension-reduction-based automatic driving neural network robustness verification method according to claim 1, wherein, for non-redundant blocks, judging whether each block is robust specifically comprises:
solving, in a layer-by-layer propagation manner, the output reachable-set estimate corresponding to each non-redundant block of the first layer;
checking in turn whether the output reachable-set estimate corresponding to each block meets the output limit; if so, the block is robust; otherwise, the block is not robust;
if all non-redundant blocks are robust, the neural network is considered safe; otherwise, the neural network is considered unsafe.
4. The dimension-reduction-based automatic driving neural network robustness verification method according to claim 1, wherein dividing the image set of the input set under the affine transformation of the first layer of the neural network according to a given width limit δ and searching for the presence or absence of a subset that does not meet the robustness requirement specifically comprises:
step 101: initializing an empty stack, denoted S;
step 102: calculating the minimum hyper-rectangle containing the image set of the input set in the first layer, and pushing this hyper-rectangle onto the top of the stack;
step 103: judging whether the stack S is currently empty; if the stack S is not empty, executing step 1031; otherwise, the neural network is considered safe;
step 1031: popping the top element Q of the stack and judging whether Q is redundant; if Q is redundant, returning to step 103; otherwise, going to step 1032;
step 1032: judging whether Q is robust; if so, returning to step 103; otherwise, going to step 1033;
step 1033: judging whether the maximum width of Q is smaller than the given width limit δ; if so, the neural network is considered unsafe; otherwise, halving Q, pushing the two small hyper-rectangles onto the top of the stack, and returning to step 103.
5. An automatic driving neural network robustness verification system based on dimension reduction, which is characterized by comprising:
means for generating a hyper-rectangular input set based on the input image data;
means for segmenting the image set of the input set under affine transformation of the first layer of the neural network according to a given width constraint δ and searching therein whether there is a subset that does not meet the robustness requirement; if not, the autopilot neural network is considered safe; otherwise, the automatic driving neural network is considered unsafe; the method specifically comprises the following steps:
dividing the image set into a plurality of subsets, ensuring that the maximum width of each subset is smaller than a given width limit δ, and then searching among the subsets for a non-robust subset, namely one whose output reachable-set estimate does not meet the output limit;
calculating the minimum hyper-rectangle containing the image set of the input set in the first layer according to the input set and affine transformation of the first layer of the neural network;
dividing the minimum hyper-rectangle calculated in the previous step into a plurality of mutually disjoint small hyper-rectangles, namely blocks, according to a given width limit δ > 0, each block having a width no greater than δ;
for each block, judging whether it is redundant;
judging whether each block is robust for non-redundant blocks;
for each block, judging whether it is redundant specifically comprises:
for the k-th block Q_k, the intersection of the preimage set P_k of the input layer and the input set [l_0, u_0] satisfies the following set of inequalities:
judging whether the intersection is empty by using a linear programming solver; if it is empty, Q_k is redundant and is discarded; otherwise, Q_k is not redundant.
6. A terminal device comprising a processor and a memory, the processor being configured to implement instructions; a memory for storing a plurality of instructions, wherein the instructions are adapted to be loaded by a processor and to perform the dimension reduction based autopilot neural network robustness verification method of any one of claims 1-4.
7. A computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the dimension-reduction based autopilot neural network robustness verification method of any one of claims 1-4.
CN202110741891.6A 2021-06-30 2021-06-30 Automatic driving neural network robustness verification method and system based on dimension reduction Active CN113469339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110741891.6A CN113469339B (en) 2021-06-30 2021-06-30 Automatic driving neural network robustness verification method and system based on dimension reduction


Publications (2)

Publication Number Publication Date
CN113469339A CN113469339A (en) 2021-10-01
CN113469339B true CN113469339B (en) 2023-09-22

Family

ID=77877130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110741891.6A Active CN113469339B (en) 2021-06-30 2021-06-30 Automatic driving neural network robustness verification method and system based on dimension reduction

Country Status (1)

Country Link
CN (1) CN113469339B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110149333A (en) * 2019-05-23 2019-08-20 桂林电子科技大学 A kind of network security situation evaluating method based on SAE+BPNN
WO2020095321A2 (en) * 2018-11-06 2020-05-14 Vishwajeet Singh Thakur Dynamic structure neural machine for solving prediction problems with uses in machine learning
CN112232126A (en) * 2020-09-14 2021-01-15 广东工业大学 Dimension reduction expression method for improving variable scene positioning robustness
CN112488205A (en) * 2020-11-30 2021-03-12 桂林电子科技大学 Neural network image classification and identification method based on optimized KPCA algorithm
CN112733941A (en) * 2021-01-12 2021-04-30 山东大学 Medical use neural network robustness verification method and system based on shell protection
CN112784915A (en) * 2021-01-29 2021-05-11 北京工业大学 Image classification method for enhancing robustness of deep neural network by optimizing decision boundary


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research progress of radar image target recognition based on deep learning; PAN Zongxu et al.; Scientia Sinica Informationis; Vol. 49, No. 12; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant