CN116150038A - Neuron sensitivity-based white-box test sample generation method - Google Patents

Neuron sensitivity-based white-box test sample generation method

Info

Publication number
CN116150038A
CN116150038A (application CN202310420554.6A)
Authority
CN
China
Prior art keywords
neuron
neurons
layer
test sample
disturbance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310420554.6A
Other languages
Chinese (zh)
Other versions
CN116150038B (en)
Inventor
练智超 (Lian Zhichao)
田凤君 (Tian Fengjun)
毛锐 (Mao Rui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202310420554.6A
Publication of CN116150038A
Application granted
Publication of CN116150038B
Legal status: Active

Classifications

    • G06F 11/3684: Software testing; test management for test design, e.g. generating new test cases
    • G06F 11/3676: Software testing; test management for coverage analysis
    • G06N 3/084: Neural networks; learning methods; backpropagation, e.g. using gradient descent
    • G06V 10/776: Image or video recognition using machine learning; validation; performance evaluation
    • G06V 10/82: Image or video recognition using neural networks


Abstract

The invention discloses a neuron sensitivity-based white-box test sample generation method, belonging to the field of artificial-intelligence security. The method obtains the minimum distortion perturbation of the model; selects a coverage metric to update the test sample set and generate a preliminary test sample queue; back-propagates the neural network's prediction result to compute neuron importance layer by layer; and selects, via neural network verification, the neurons in the model that are sensitive to perturbation changes, generating the model's final test sample set. Exploiting the fact that neuron bounds respond differently to perturbation changes, the invention extracts the neurons whose bounds change most, defines them as sensitive neurons, uses the selected coverage metric for guidance, and adds a fixed-region perturbation to each picture in the sample set, so that neuron coverage does not drop, the samples' sensitivity to perturbation increases, and the coverage of the neural network improves.

Description

Neuron sensitivity-based white-box test sample generation method
Technical Field
The invention relates to a test sample generation method, in particular to a test sample generation method based on neuron sensitivity.
Background
Deep neural networks are data-driven: a model's decision behavior is determined to a great extent by its training data. Having achieved strong results on tasks such as image classification, object detection, image segmentation, natural language processing and recommendation systems, deep neural networks are now widely used. Many software systems embed deep learning models, and deep learning increasingly replaces traditional machine-learning work.
More and more studies have shown that deep learning models are fragile, which often makes deep neural networks unreliable. In an autonomous driving system, for example, a model that fails to correctly identify a pedestrian or a traffic light can cause an accident. The propagation of perturbation information inside a neural network model, in particular the nonlinear transmission through its middle layers, is poorly understood, so weak directed perturbations can interfere with the model's output and enable perception attacks such as evasion and misleading. In addition, neurons carry little semantic information, the network is weakly interpretable, deep models generalize poorly, and performance depends heavily on the data samples. In recent years many researchers at home and abroad have taken traditional software testing as inspiration, continuously refining reliability-testing metric systems and procedures and proposing new testing frameworks. Drawing on traditional static software testing, these studies reveal the correctness of a model's internal logic by analyzing its internal structure and layer relationships, and propose a variety of test sample generation techniques that probe the model's behavior space with large numbers of targeted samples, effectively evaluating metrics such as the model's safety, robustness and reliability.
In conventional software testing it is important to generate test cases that cover as much code as possible and crash the program. In deep neural network testing, however, the model produces a prediction for any given test case, which makes it hard to tell whether a generated test sample is good enough, i.e. whether it covers enough of the model. Borrowing the coverage idea from conventional software testing, some researchers proposed neuron coverage criteria that aim to activate as many neurons of the network as possible, and others proposed further coverage criteria such as KMNC and NBC. Researchers subsequently developed test sample generation techniques based on these criteria. Existing approaches, however, ignore the relationship between input perturbations and output results. A sensitivity-based test sample generation method that combines the perturbation with the neural network is therefore needed.
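To make the neuron-coverage idea above concrete, the following sketch computes the basic criterion for a batch of inputs; the activation threshold and toy activations are illustrative, not taken from the patent:

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons activated above `threshold` on at least one input.

    `activations` is a list of (batch, neurons) arrays, one per layer.
    """
    covered = 0
    total = 0
    for layer_acts in activations:
        # A neuron counts as covered if any input in the batch activates it.
        covered += int(np.sum(layer_acts.max(axis=0) > threshold))
        total += layer_acts.shape[1]
    return covered / total

# Toy example: two layers, three inputs each.
acts = [np.array([[0.9, 0.1], [0.2, 0.3], [0.1, 0.0]]),
        np.array([[0.0, 0.7, 0.4], [0.1, 0.2, 0.6], [0.0, 0.1, 0.2]])]
print(neuron_coverage(acts))  # 3 of 5 neurons exceed 0.5 -> 0.6
```

A neuron counts as covered once any input drives it above the threshold; raising the threshold makes the criterion stricter.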
Disclosure of Invention
The technical problem solved by the invention: providing a test sample generation method for white-box models that uses sensitivity and importance neurons.
The technical scheme adopted by the invention to solve the above technical problems is as follows:
A neuron sensitivity-based white-box test sample generation method comprising the following steps:
step 1: obtain the minimum distortion perturbation of the training sample set;
step 2: select a coverage metric to update the test sample set and generate a preliminary test sample queue;
step 3: back-propagate the neural network's prediction result and compute neuron importance layer by layer;
step 4: select, via neural network verification, the neurons in the model that are sensitive to perturbation changes, and generate the model's final test sample set.
Further, in step 1, the minimum distortion perturbation of the training sample set is obtained as follows:
while searching for the minimum distortion perturbation of the training sample set, a perturbation $\epsilon$ is added to the original training set, the samples are input to the network under attack, and, using forward propagation of the neural network, the upper and lower bounds of each neuron are obtained by forward-propagating the upper and lower bounds of the original sample.
Further, the upper and lower bounds of the original sample are expressed as:

$$u^{(0)} = x + \epsilon, \qquad l^{(0)} = x - \epsilon$$

where $u$ denotes a neuron's upper bound, $l$ its lower bound, $\epsilon$ the perturbation, and $x$ the original picture.
The upper and lower bounds of each neuron are denoted as:

$$u_j^{(k)} = \max e_j^{\top} z^{(k)}, \qquad l_j^{(k)} = \min e_j^{\top} z^{(k)}, \qquad z^{(k)} = h^{(k)}\big(z^{(k-1)}\big)$$

where $h^{(k)}$ is the corresponding hidden-layer operation of the neural network, $k$ indexes the layer, $j$ the neuron within the layer, $e_j$ is the standard $j$-th basis vector, $e_j^{\top}$ its transpose, $z^{(k)}$ the output of the layer's neurons, and $h^{(k)}(z^{(k-1)})$ the computation of the next hidden layer applied to the previous layer's output, the maximum and minimum being taken over all inputs within the perturbation range; the perturbation for which the resulting upper and lower bounds of the classes do not cross is the minimum distortion perturbation.
From the obtained minimum distortion perturbation the upper and lower bounds of every neuron can be computed; the upper bounds form the neuron boundary set $U = \{u_j^{(k)}\}$ and the lower bounds form the neuron boundary set $L = \{l_j^{(k)}\}$, where $k$ is the layer index and $j$ the neuron index.
Further, in step 2, the coverage metric is used to update the test sample set and generate a preliminary test sample queue, as follows:
step 2.1: selecting a coverage criterion as the basis for deciding whether a sample joins the new queue;
step 2.2: selecting a seed-selection strategy for ordering the seeds;
step 2.3: evaluating the coverage metric, putting samples whose coverage exceeds the original into the queue, and iterating the loop to generate the preliminary test sample queue.
Further, in step 3, the neural network's prediction result is back-propagated and neuron importance is computed layer by layer; the importance of each neuron in the network can be computed via its relevance score:

$$f(x) = \sum_i R_i^{(L)} = \dots = \sum_i R_i^{(k+1)} = \sum_i R_i^{(k)} = \dots = \sum_i R_i^{(1)}$$

where $f(x)$ is the model output and $R_j^{(k)}$ denotes the relevance of the $j$-th neuron of layer $k$; $R_j^{(k)}$ equals the sum of the relevance contributions of all neurons of layer $k+1$ to which it is connected, so that the total relevance of every layer is conserved. Relevance is back-propagated from the last layer $L$ of the network, whose relevance equals $f(x)$, down to all neurons of the first layer, including the input image, and the importance neurons are thereby determined.
Further, in step 4, the neurons sensitive to perturbation changes in the model are selected via neural network verification and the model's final test sample set is generated, as follows:
step 4.1: according to the importance determined in step 3, measured by the relevance $R_j^{(k)}$, a neuron whose relevance value is not 0 contributes to the model's prediction result; the importance neuron set IN is therefore defined as:

$$IN = \big\{\, n_j^{(k)} : R_j^{(k)} \neq 0 \,\big\}$$

step 4.2: the minimum distortion perturbation $\epsilon$ is adjusted according to the conversion factor $\alpha$ to obtain an enlarged distortion perturbation $\epsilon' = \alpha\,\epsilon$; with the enlarged perturbation, new neuron upper and lower bounds $\hat{u}_j^{(k)}$ and $\hat{l}_j^{(k)}$ are obtained, and from the boundary change a boundary change ratio $\lambda_j^{(k)}$ is computed. As $\epsilon$ grows, the boundary change of each neuron differs; a neuron is judged sensitive to the perturbation when its boundary change ratio exceeds the threshold $t$, giving the sensitive neuron set SN:

$$SN = \Big\{\, n_j^{(k)} : \lambda_j^{(k)} = \frac{(\hat{u}_j^{(k)} - \hat{l}_j^{(k)}) - (u_j^{(k)} - l_j^{(k)})}{u_j^{(k)} - l_j^{(k)}} > t \,\Big\}$$

where $n_j^{(k)}$ is the $j$-th neuron of layer $k$; $u_j^{(k)}$ and $l_j^{(k)}$ are its original upper and lower bounds; and $\hat{u}_j^{(k)}$, $\hat{l}_j^{(k)}$ are the new upper and lower bounds after adjustment by $\alpha$.
step 4.3: with the importance neuron set IN from step 4.1 and the sensitive neuron set SN from step 4.2, sensitive and importance neurons are selected in each layer and each selected neuron is mapped onto a test case; the loop iterates and terminates once the coverage has improved, finally generating the final test sample set.
Beneficial effects: compared with the prior art, the invention has the following advantages:
(1) The invention provides a neuron sensitivity-based white-box test sample generation method. When generating test samples, it combines layer-wise relevance propagation with neural network verification to balance the selection of sensitive and importance neurons. Layer-wise relevance propagation computes, from the model's prediction result, the influence of every neuron in the network on that prediction, quantifies each neuron's relevance value, and thereby obtains each neuron's importance. Neural network verification computes the network's prediction bounds under an added distortion perturbation and, by adjusting the magnitude of that perturbation until the classes in the prediction result are separated, obtains the network's minimum distortion perturbation.
(2) The invention proposes a new coverage criterion: the sensitivity-importance neuron coverage criterion. Defining this new criterion extends the family of neural network coverage criteria, and the new coverage is applied to guide the generation of test samples.
(3) Compared with other transferable test sample generation methods, the invention exploits the fact that neuron bounds respond differently to perturbation changes: it extracts the neurons whose bounds change most, defines them as sensitive neurons, uses the selected coverage metric for guidance, and adds a fixed-region perturbation to each picture in the sample set, so that neuron coverage does not drop, the samples' sensitivity to perturbation increases, and the coverage of the neural network improves. The sensitivity-importance coverage is higher while the neuron coverage effect is preserved.
Drawings
FIG. 1 is a flow chart of a method for generating a white-box test sample based on neuron sensitivity according to the present invention.
Detailed Description
The invention is further illustrated below with specific examples carried out on the basis of its technical solutions; it should be understood that these examples only illustrate the invention and do not limit its scope.
The neuron sensitivity-based white-box test sample generation method of the invention first obtains the minimum distortion perturbation of the model; selects a coverage metric to update the test sample set and generate a preliminary test sample queue; back-propagates the neural network's prediction result to compute neuron importance layer by layer; and selects, via neural network verification, the neurons sensitive to perturbation changes to generate the model's final test sample set. The method specifically comprises the following steps:
Step 1: obtain the minimum distortion perturbation of the training sample set, as follows:
while searching for the minimum distortion perturbation of the training sample set, a perturbation $\epsilon$ is added to the original training set, the samples are input to the network under attack, and, using forward propagation of the neural network, the upper and lower bounds of each neuron are obtained by forward-propagating the upper and lower bounds of the original sample.
The upper and lower bounds of the original sample are expressed as:

$$u^{(0)} = x + \epsilon, \qquad l^{(0)} = x - \epsilon$$

where $u$ denotes a neuron's upper bound, $l$ its lower bound, $\epsilon$ the perturbation, and $x$ the original picture.
The upper and lower bounds of each neuron are denoted as:

$$u_j^{(k)} = \max e_j^{\top} z^{(k)}, \qquad l_j^{(k)} = \min e_j^{\top} z^{(k)}, \qquad z^{(k)} = h^{(k)}\big(z^{(k-1)}\big)$$

where $h^{(k)}$ is the corresponding hidden-layer operation of the neural network, $k$ indexes the layer, $j$ the neuron within the layer, $e_j$ is the standard $j$-th basis vector, $e_j^{\top}$ its transpose, $z^{(k)}$ the output of the layer's neurons, and $h^{(k)}(z^{(k-1)})$ the computation of the next hidden layer applied to the previous layer's output, the maximum and minimum being taken over all inputs within the perturbation range; the perturbation for which the resulting upper and lower bounds of the classes do not cross is the minimum distortion perturbation.
From the obtained minimum distortion perturbation the upper and lower bounds of every neuron can be computed; the upper bounds form the neuron boundary set $U = \{u_j^{(k)}\}$ and the lower bounds form the neuron boundary set $L = \{l_j^{(k)}\}$, where $k$ is the layer index and $j$ the neuron index.
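The forward propagation of bounds described in step 1 can be sketched with simple interval arithmetic for a small fully connected ReLU network. The weights and perturbation radius below are illustrative, and this is only one common bound propagation scheme, not necessarily the patent's exact verification procedure:

```python
import numpy as np

def interval_bounds(x, eps, weights, biases):
    """Propagate the input interval [x - eps, x + eps] through ReLU layers,
    returning per-neuron (lower, upper) bounds for every layer."""
    l, u = x - eps, x + eps          # bounds of the original sample
    bounds = []
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        # Interval arithmetic for the affine map W @ z + b.
        l_new = W_pos @ l + W_neg @ u + b
        u_new = W_pos @ u + W_neg @ l + b
        # ReLU is monotone, so it maps bounds to bounds.
        l, u = np.maximum(l_new, 0), np.maximum(u_new, 0)
        bounds.append((l.copy(), u.copy()))
    return bounds

W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.zeros(2)
bnds = interval_bounds(np.array([0.2, 0.4]), 0.1, [W1], [b1])
print(bnds[0])  # lower/upper bound of each hidden neuron
```

Tighter bounds can be obtained with more sophisticated verification techniques, but the principle of pushing the input perturbation forward layer by layer is the same.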
Step 2: selecting coverage rate indexes to update the test sample set, and generating a preliminary test sample queue; the method comprises the following steps:
step 2.1: selecting a coverage rate criterion as a judging basis for generating a new queue;
coverage criteria are specifically sensitivity importance neuron coverage criteria:
$$SNICov = \frac{\big|\{\, n \in SN \cap IN : n \text{ is covered} \,\}\big|}{|N|}$$

where $N$ is the set of neurons of the neural network and $|\cdot|$ is the function that counts the number of neurons in a set; the criterion thus measures the proportion of sensitive importance neurons that are covered.
Step 2.2: a seed selection strategy is selected for seed ordering, random seed selection may be selected, or the probability of first arriving seeds being selected is greater according to the time of seed addition to the queue, and the probability of being selected is smaller as the time of seed addition to the queue is longer.
Step 2.3: judging the coverage rate index, and putting the coverage rate index which is larger than the original index into a queue, and performing loop iteration to generate a preliminary test sample queue.
Step 3: the nerve network prediction result is back-propagated, the importance of the nerve cells is calculated layer by layer, and the importance calculation method of each nerve cell in the nerve network can calculate through the relevance score:
$$f(x) = \sum_i R_i^{(L)} = \dots = \sum_i R_i^{(k+1)} = \sum_i R_i^{(k)} = \dots = \sum_i R_i^{(1)}$$

where $f(x)$ is the model output and $R_j^{(k)}$ denotes the relevance of the $j$-th neuron of layer $k$; $R_j^{(k)}$ equals the sum of the relevance contributions of all neurons of layer $k+1$ to which it is connected, so that the total relevance of every layer is conserved. Relevance is back-propagated from the last layer $L$ of the network, whose relevance equals $f(x)$, down to all neurons of the first layer, including the input image, and the importance neurons are thereby determined.
Step 4: selecting neurons sensitive to disturbance change in the model through a neural network verification technology, and generating a final test sample set of the model; the method comprises the following steps:
step 4.1: the importance of each neuron, determined in step 3, is measured by its relevance
$R_j^{(k)}$; when the relevance value is not 0, the neuron contributes to the model's prediction result, thereby defining the importance neuron set IN:

$$IN = \big\{\, n_j^{(k)} : R_j^{(k)} \neq 0 \,\big\}$$

step 4.2: the minimum distortion perturbation $\epsilon$ is adjusted according to the conversion factor $\alpha$ to obtain an enlarged distortion perturbation $\epsilon' = \alpha\,\epsilon$; with the enlarged perturbation, new neuron upper and lower bounds $\hat{u}_j^{(k)}$ and $\hat{l}_j^{(k)}$ are obtained, and from the boundary change a boundary change ratio $\lambda_j^{(k)}$ is computed. As $\epsilon$ grows, the boundary change of each neuron differs; a neuron is judged sensitive to the perturbation when its boundary change ratio exceeds the threshold $t$, giving the sensitive neuron set SN:

$$SN = \Big\{\, n_j^{(k)} : \lambda_j^{(k)} = \frac{(\hat{u}_j^{(k)} - \hat{l}_j^{(k)}) - (u_j^{(k)} - l_j^{(k)})}{u_j^{(k)} - l_j^{(k)}} > t \,\Big\}$$

where $n_j^{(k)}$ is the $j$-th neuron of layer $k$; $u_j^{(k)}$ and $l_j^{(k)}$ are its original upper and lower bounds; and $\hat{u}_j^{(k)}$, $\hat{l}_j^{(k)}$ are the new upper and lower bounds after adjustment by $\alpha$.
step 4.3: with the importance neuron set IN from step 4.1 and the sensitive neuron set SN from step 4.2, sensitive and importance neurons are selected in each layer and each selected neuron is mapped onto a test case; the loop iterates and terminates when the coverage exceeds the original coverage, finally generating the final test sample set.
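Step 4.2's boundary-change test can be sketched as follows; the change-ratio formula and the threshold are illustrative reconstructions, since the original formula is published only as an image:

```python
import numpy as np

def sensitive_neurons(old_bounds, new_bounds, t=0.1):
    """Indices (layer, neuron) of neurons whose bound width grows by more
    than ratio t when the distortion perturbation is enlarged."""
    sensitive = []
    for k, ((l, u), (l2, u2)) in enumerate(zip(old_bounds, new_bounds)):
        width, new_width = u - l, u2 - l2
        # Relative growth of each neuron's [lower, upper] interval.
        ratio = (new_width - width) / np.maximum(width, 1e-12)
        sensitive.extend((k, int(j)) for j in np.flatnonzero(ratio > t))
    return sensitive

# One layer, two neurons: bounds before and after enlarging the perturbation.
old = [(np.array([0.0, 0.2]), np.array([0.0, 0.4]))]
new = [(np.array([0.0, 0.1]), np.array([0.0, 0.6]))]
print(sensitive_neurons(old, new))  # neuron 1 in layer 0 widens markedly
```

Neurons returned here would be intersected with the importance set IN before guiding test case generation, per step 4.3.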
The effectiveness and efficiency of the method of the invention were verified by the following experiments.
The evaluation metrics are neuron coverage and sensitivity-importance neuron coverage.
First a data set is selected: the MNIST data set, which contains 60000 training samples and 10000 test samples, each a 28 × 28 picture of a handwritten digit 0-9. Each pixel is represented by a floating-point number between 0 and 1: black is 0, and the closer the value is to 1, the whiter the pixel. The invention then selects the MNIST model as the white-box model. The comparison method is the coverage-guided DeepHunter sample generation method.
Table 1: Neuron coverage under the white-box model of the invention
[table values published as an image in the original document]
Table 2: Sensitivity-importance neuron coverage for different parameters under the white-box model of the invention
[table values published as an image in the original document]
The results in Tables 1 and 2 show that the test samples generated by the method of the invention exhibit no large drop under the neuron coverage criterion, and that the samples it generates have a higher sensitivity-importance neuron coverage than those generated by DeepHunter. In summary, the invention proposes a test sample generation method based on neuron sensitivity: when generating new samples, using sensitivity-importance neurons as the coverage indicator achieves greater coverage of the neurons in the neural network.
The invention combines layer-wise relevance propagation with neural network verification to balance the selection of sensitive and importance neurons.
Compared with other transferable test sample generation methods, it attains a higher sensitivity-importance coverage while preserving the neuron coverage effect.
The foregoing is merely a preferred embodiment of the invention. It should be noted that those skilled in the art can make improvements and modifications without departing from the principles of the invention, and such improvements and modifications also fall within the protection scope of the invention.

Claims (9)

1. A neuron sensitivity-based white-box test sample generation method, characterized by comprising the following steps:
step 1: obtaining the minimum distortion perturbation of the training sample set;
step 2: selecting a coverage metric to update the test sample set and generating a preliminary test sample queue;
step 3: back-propagating the neural network's prediction result and computing neuron importance layer by layer;
step 4: selecting, via neural network verification, the neurons in the model that are sensitive to perturbation changes and generating the model's final test sample set.
2. The neuron sensitivity-based white-box test sample generation method according to claim 1, wherein in step 1 the minimum distortion perturbation of the training sample set is obtained as follows: while searching for the minimum distortion perturbation of the training sample set, a perturbation $\epsilon$ is added to the original training set, the samples are input to the network under attack, and, using forward propagation of the neural network, the upper and lower bounds of each neuron are obtained by forward-propagating the upper and lower bounds of the original sample.
3. The neuron sensitivity-based white-box test sample generation method according to claim 2, wherein the upper and lower bounds of the original sample are expressed as:

$$u^{(0)} = x + \epsilon, \qquad l^{(0)} = x - \epsilon$$

where $u$ denotes a neuron's upper bound, $l$ its lower bound, $\epsilon$ the perturbation, and $x$ the original picture;
the upper and lower bounds of each neuron are denoted as:

$$u_j^{(k)} = \max e_j^{\top} z^{(k)}, \qquad l_j^{(k)} = \min e_j^{\top} z^{(k)}, \qquad z^{(k)} = h^{(k)}\big(z^{(k-1)}\big)$$

where $h^{(k)}$ is the corresponding hidden-layer operation of the neural network, $k$ indexes the layer, $j$ the neuron within the layer, $e_j$ is the standard $j$-th basis vector, $e_j^{\top}$ its transpose, $z^{(k)}$ the output of the layer's neurons, and $h^{(k)}(z^{(k-1)})$ the computation of the next hidden layer applied to the previous layer's output; the perturbation for which the resulting upper and lower bounds of the classes do not cross is the minimum distortion perturbation; from the obtained minimum distortion perturbation the upper and lower bounds of every neuron can be computed, the upper bounds forming the neuron boundary set $U = \{u_j^{(k)}\}$ and the lower bounds forming the neuron boundary set $L = \{l_j^{(k)}\}$, where $k$ is the layer index and $j$ the neuron index.
4. The neuron sensitivity-based white-box test sample generation method according to claim 1, wherein in step 2 the coverage metric is used to update the test sample set and generate a preliminary test sample queue, as follows:
step 2.1: selecting a coverage criterion as the basis for deciding whether a sample joins the new queue;
step 2.2: selecting a seed-selection strategy for ordering the seeds;
step 2.3: evaluating the coverage metric, putting samples whose coverage exceeds the original into the queue, and iterating the loop to generate the preliminary test sample queue.
5. The neuron sensitivity-based white-box test sample generation method according to claim 1, wherein in step 3 the neural network's prediction result is back-propagated and neuron importance is computed layer by layer, the importance of each neuron in the network being computed via its relevance score:

$$f(x) = \sum_i R_i^{(L)} = \dots = \sum_i R_i^{(k+1)} = \sum_i R_i^{(k)} = \dots = \sum_i R_i^{(1)}$$

where $f(x)$ is the model output and $R_j^{(k)}$ denotes the relevance of the $j$-th neuron of layer $k$; $R_j^{(k)}$ equals the sum of the relevance contributions of all neurons of layer $k+1$ to which it is connected, so that the total relevance of every layer is conserved; relevance is back-propagated from the last layer $L$ of the network, whose relevance equals $f(x)$, down to all neurons of the first layer, including the input image, and the importance neurons are thereby determined.
6. The method for generating a white-box test sample based on neuron sensitivity according to claim 1, wherein in step 4, neurons in the model that are sensitive to disturbance changes are selected by a neural network verification technique, and the final test sample set of the model is generated, by the following method:
step 4.1: first, according to the importance of the neurons determined in step 3, define the importance neuron set IN;
step 4.2: adjust the minimum distortion disturbance \(\epsilon\) according to the conversion factor \(\gamma\), and obtain the boundary change ratio through the boundary change, so as to obtain the sensitive neuron set SN;
step 4.3: with the importance neuron set IN and the sensitive neuron set SN obtained in steps 4.1 and 4.2, select the sensitive neurons and importance neurons in each layer, map each selected neuron onto a test case, and iterate the loop, terminating when the coverage rate is improved, finally generating the final test sample set.
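Step 4.3's per-layer selection could be sketched as intersecting the two sets (one plausible reading — the claim only says sensitive and importance neurons are selected in each layer; `select_per_layer` and the `(layer, index)` encoding are assumptions):

```python
def select_per_layer(IN, SN, num_layers):
    """Group neurons that are both important and sensitive, layer by layer."""
    chosen = IN & SN                      # neurons appearing in both sets
    return {l: sorted(i for layer, i in chosen if layer == l)
            for l in range(num_layers)}

IN = {(0, 0), (0, 2), (1, 1)}             # (layer, index) importance neurons
SN = {(0, 2), (1, 0), (1, 1)}             # (layer, index) sensitive neurons
per_layer = select_per_layer(IN, SN, num_layers=2)
```

Each selected `(layer, index)` pair would then be mapped onto a test case and the loop iterated as described in step 4.3.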
7. The method of generating a white-box test sample based on neuron sensitivity according to claim 2, wherein in step 4.1, the importance is indicated by the relevance \(R_i^{(l)}\): when the relevance is not 0, the neuron contributes to the prediction result of the model, thereby defining the importance neuron set IN:

\( \mathrm{IN} = \{\, n_i^{(l)} \mid R_i^{(l)} \neq 0 \,\} \)
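In code, the set IN is simply every neuron with non-zero relevance; a sketch assuming per-layer relevance arrays such as those produced by a relevance-propagation pass (the `(layer, index)` encoding and `tol` parameter are illustrative additions):

```python
import numpy as np

def important_neurons(relevance_by_layer, tol=0.0):
    """IN = {(l, i) : R_i^(l) != 0}, per claim 7."""
    return {(l, i)
            for l, R in enumerate(relevance_by_layer)
            for i in range(len(R))
            if abs(R[i]) > tol}

# two layers of relevance scores; zeros mark neurons with no contribution
R_layers = [np.array([0.0, 0.7, -0.2]), np.array([1.1, 0.0])]
IN = important_neurons(R_layers)
```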
8. The method of generating a white-box test sample based on neuron sensitivity according to claim 6, wherein in step 4.2, the minimum distortion disturbance \(\epsilon\) is adjusted according to the conversion factor \(\gamma\) to obtain the newly added distortion disturbance \(\epsilon'\); after the newly added distortion disturbance is obtained, new upper and lower bounds of each neuron are obtained, and the boundary change ratio is obtained from the boundary change; along \(\epsilon'\), the boundary change of each neuron is different, and when the boundary change ratio is greater than the threshold \(\theta\), the neuron is judged to be sensitive to the disturbance, whereby the sensitive neuron set SN is obtained.
9. The method of claim 8, wherein the set of sensitive neurons SN is:

\( \mathrm{SN} = \left\{\, n_i^{(l)} \;\middle|\; \frac{u_i'^{(l)} - l_i'^{(l)}}{u_i^{(l)} - l_i^{(l)}} > \theta \,\right\}, \)

wherein \(n_i^{(l)}\) is the \(i\)-th neuron of the \(l\)-th layer; \(u_i^{(l)}\) and \(l_i^{(l)}\) are respectively the original upper bound and the original lower bound of the \(i\)-th neuron of the \(l\)-th layer; \(u_i'^{(l)}\) and \(l_i'^{(l)}\) are respectively the new upper bound and the new lower bound of the \(i\)-th neuron of the \(l\)-th layer modulated by \(\epsilon'\).
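The SN construction of claims 8-9 can be sketched directly from the definition, assuming a `bounds(eps)` oracle (e.g. interval bound propagation — the claims leave the concrete verification technique abstract) that returns per-neuron lower/upper bounds valid under a given input distortion budget:

```python
def sensitive_neurons(bounds, eps, gamma, theta):
    """SN = {(l, i) : (u'_i - l'_i) / (u_i - l_i) > theta}, per claims 8-9.

    bounds(e) must return, per layer, a list of (lower, upper) neuron bounds
    that hold under input distortion budget e.
    """
    orig = bounds(eps)              # original bounds under epsilon
    new = bounds(gamma * eps)       # bounds under the enlarged distortion eps'
    SN = set()
    for l, (layer_o, layer_n) in enumerate(zip(orig, new)):
        for i, ((lo, up), (lo2, up2)) in enumerate(zip(layer_o, layer_n)):
            if (up2 - lo2) / (up - lo) > theta:   # boundary change ratio
                SN.add((l, i))
    return SN

# toy oracle: neuron 0's bound width grows linearly in eps, neuron 1's quadratically
toy_bounds = lambda e: [[(-e, e), (-e ** 2, e ** 2)]]
SN = sensitive_neurons(toy_bounds, eps=0.1, gamma=2.0, theta=3.0)
```

With these toy bounds, doubling \(\epsilon\) doubles the first neuron's bound width (ratio 2) but quadruples the second's (ratio 4), so only the second exceeds the threshold and is judged sensitive.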
CN202310420554.6A 2023-04-19 2023-04-19 Neuron sensitivity-based white-box test sample generation method Active CN116150038B (en)


Publications (2)

Publication Number Publication Date
CN116150038A true CN116150038A (en) 2023-05-23
CN116150038B CN116150038B (en) 2023-06-30
