CN116150038A - Neuron sensitivity-based white-box test sample generation method
- Publication number
- CN116150038A (application CN202310420554.6A)
- Authority
- CN
- China
- Prior art keywords
- neuron
- neurons
- layer
- test sample
- disturbance
- Prior art date
- Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed): Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3676—Test management for coverage analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention discloses a neuron-sensitivity-based white-box test sample generation method, belonging to the field of artificial intelligence security. The method obtains the minimum distortion perturbation of the model; selects a coverage index to update the test sample set and generates a preliminary test sample queue; back-propagates the neural network's prediction result and computes the importance of the neurons layer by layer; and selects the neurons in the model that are sensitive to perturbation change through a neural network verification technique, generating the final test sample set of the model. Exploiting the fact that neuron bounds react differently to perturbation changes, the invention extracts the neurons of the model whose bounds change most and defines them as sensitive neurons; the selected coverage index is used for guidance while fixed-region perturbations are added to the pictures in the sample set. This ensures that neuron coverage does not drop, improves the samples' sensitivity to perturbation, and raises the coverage of the neural network.
Description
Technical Field
The invention relates to a test sample generation method, and in particular to a white-box test sample generation method based on neuron sensitivity.
Background
Deep neural networks are data-driven: the decision behavior of a model is determined to a great extent by its training data. As deep neural networks have achieved good results on tasks such as image classification, object detection, image segmentation, natural language processing, and recommendation systems, they have come into widespread use. Many kinds of software now embed deep learning models, and deep learning increasingly replaces traditional machine-learning work.
More and more studies have demonstrated that deep learning models are fragile, which often makes deep neural networks unreliable. In an autonomous driving system, for example, a model that cannot accurately identify pedestrians or traffic lights can cause traffic accidents. Because the propagation of perturbation information inside a neural network model, especially the nonlinear transformations in its intermediate layers, has not been analyzed in depth, weakly directed perturbations can interfere with the model's output and realize perception attacks such as evasion and misleading. In addition, because neurons in a neural network carry little semantic information, interpretability is weak, the generalization of deep neural models is limited, and they depend heavily on data samples. In recent years, many researchers at home and abroad have taken traditional software testing techniques as a heuristic, continuously refining reliability-testing index systems and method flows and proposing new testing frameworks. These studies borrow from traditional static software testing and reveal the correctness of a model's internal logic by analyzing its internal structure and layer relationships; various test sample generation techniques have been proposed, and large numbers of targeted samples are used to explore the model's behavior space, so that indices such as the model's safety, robustness, and reliability can be effectively evaluated.
In conventional software testing, it is important to generate test cases that cover as much code as possible and crash the program. In deep neural network testing, however, the model always yields a prediction for any given test case, which makes it difficult to know whether a generated test sample "covers" the model well enough. Borrowing the coverage idea from conventional software testing, some researchers proposed the neuron coverage criterion, which aims to activate as many neurons of the network as possible; others have proposed further coverage criteria such as KMNC and NBC, and test sample generation techniques have subsequently been developed on top of these criteria. Existing approaches, however, ignore the relationship between input perturbations and output results. There is therefore a need for a sensitivity-based test sample generation method that combines the perturbation with the neural network.
Disclosure of Invention
The invention solves the following technical problem: providing a test sample generation method for a white-box model that uses sensitivity-importance neurons.
To solve this problem, the invention adopts the following technical scheme.
The neuron-sensitivity-based white-box test sample generation method comprises the following steps:
Step 1: obtain the minimum distortion perturbation of the training sample set;
Step 2: select a coverage index to update the test sample set and generate a preliminary test sample queue;
Step 3: back-propagate the neural network's prediction result and compute the importance of the neurons layer by layer;
Step 4: select the neurons in the model that are sensitive to perturbation change through a neural network verification technique, and generate the final test sample set of the model.
Further, in step 1, the minimum distortion perturbation of the training sample set is obtained by the following method:
In the search for the minimum distortion perturbation of the training sample set, a perturbation ε is added to the original training set, the original samples are fed into the attacked network, and, using the forward propagation of the neural network, the upper and lower bounds of the original sample are propagated forward to obtain the upper and lower bounds of every neuron.
Further, the upper and lower bounds of the original sample are expressed as:
u^(0) = x_0 + ε,  l^(0) = x_0 − ε,
where u^(0) denotes the upper bound of the neurons, l^(0) the lower bound, ε the perturbation, and x_0 the original picture.
The upper and lower bounds of each neuron are expressed as:
u_j^(k) = max { e_j^T · h^(k)(z) : l^(k−1) ≤ z ≤ u^(k−1) },
l_j^(k) = min { e_j^T · h^(k)(z) : l^(k−1) ≤ z ≤ u^(k−1) },
where h^(k) is the corresponding hidden-layer operation of the neural network, k denotes the layer of the neural network, j denotes the j-th neuron, e_j is the j-th standard basis vector and e_j^T its transpose, z denotes the output of the neurons of the previous layer, and h^(k)(z) is the computation of the next hidden layer on the model output of the previous layer. The perturbation at which the upper and lower bounds of the classes no longer cross is the minimum distortion perturbation.
From the obtained minimum distortion perturbation, the upper and lower bounds of every neuron can be obtained; the upper bounds form the neuron boundary set U = { u_j^(k) } and the lower bounds form the neuron boundary set L = { l_j^(k) }, where k denotes the k-th layer and j the j-th neuron.
Further, in step 2, the coverage index is selected to update the test sample set and a preliminary test sample queue is generated by the following method:
Step 2.1: select a coverage criterion as the basis for judging whether a sample enters the new queue;
Step 2.2: select a seed selection strategy for ordering the seeds;
Step 2.3: evaluate the coverage index, put samples whose coverage exceeds the original value into the queue, and iterate the loop to generate the preliminary test sample queue.
Further, in step 3, the neural network's prediction result is back-propagated and the importance of the neurons is calculated layer by layer; the importance of each neuron in the neural network is calculated through its relevance score:
R_j^(k) = Σ_i R_{j←i}^(k,k+1),
where the relevance of the j-th neuron in the k-th layer is denoted R_j^(k); it equals the sum of the relevance shares of all neurons of layer k+1 to which that neuron is related, so that the relevance is conserved from layer to layer:
f(x) = Σ_j R_j^(L) = … = Σ_j R_j^(k) = … = Σ_j R_j^(1),
where f(x) denotes the output of the model. The relevance is back-propagated from the last layer L of the neural network, whose relevance equals f(x), to all neurons of the first layer (including the input image), and the importance neurons are thereby determined.
Further, in step 4, the neurons in the model that are sensitive to perturbation change are selected through a neural network verification technique, and the final test sample set of the model is generated by the following method:
Step 4.1: the importance determined in step 3 is indicated by the relevance R; a neuron contributes to the prediction result of the model when its relevance is not 0, whereby the importance neuron set IN is defined:
IN = { n_j^(k) | R_j^(k) ≠ 0 }.
Step 4.2: the minimum distortion perturbation ε is adjusted according to the conversion factor γ to obtain the newly added distortion perturbation ε′ = γ·ε. After the newly added distortion perturbation is obtained, new neuron upper and lower bounds u′_j^(k) and l′_j^(k) are obtained, and a boundary change ratio r_j^(k) is obtained from the boundary change. Since the boundary change of each neuron is different, a neuron is judged to be sensitive to the perturbation when its boundary change ratio is greater than the threshold β, and the sensitive neuron set SN is obtained:
SN = { n_j^(k) | r_j^(k) > β },
where n_j^(k) is the j-th neuron of the k-th layer; u_j^(k) and l_j^(k) are respectively the original upper bound and original lower bound of that neuron; u′_j^(k) and l′_j^(k) are respectively the new upper bound and new lower bound of that neuron adjusted by γ.
Step 4.3: with the importance neuron set IN obtained in step 4.1 and the sensitive neuron set SN obtained in step 4.2, the sensitive neurons and importance neurons in each layer are selected, each selected neuron is mapped onto a test case, and the loop is iterated, terminating when the coverage is improved; the final test sample set is thereby generated.
Beneficial effects: compared with the prior art, the invention has the following advantages:
(1) The invention provides a white-box test sample generation method based on neuron sensitivity. When generating test samples, the invention combines layer-wise relevance propagation with neural network verification to balance the degree to which sensitive and importance neurons are selected. Layer-wise relevance propagation computes, from the model's prediction result, the influence of each neuron in the neural network on that result, quantitatively calculating a relevance value for each neuron and thereby obtaining its importance. Neural network verification computes the prediction bounds of the neural network under an added distortion perturbation and separates the classes of the prediction result by adjusting the magnitude of the perturbation, obtaining the minimum distortion perturbation of the network.
(2) The invention proposes a new coverage criterion, the sensitivity-importance neuron coverage criterion, which extends the coverage criteria of neural networks, and applies the new coverage to guide test sample generation.
(3) Compared with other transferable test sample generation methods, the invention exploits the fact that neuron bounds react differently to perturbation changes: it extracts the neurons of the model whose bounds change most, defines them as sensitive neurons, uses the selected coverage index for guidance, and meanwhile adds fixed-region perturbations to the pictures in the sample set. This ensures that neuron coverage does not drop, improves the samples' sensitivity to perturbation, and raises the coverage of the neural network; while the neuron coverage effect is preserved, the sensitivity-importance coverage is higher.
Drawings
FIG. 1 is a flow chart of a method for generating a white-box test sample based on neuron sensitivity according to the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples carried out on the basis of its technical solutions; it should be understood that these examples are only intended to illustrate the invention and not to limit its scope.
The neuron-sensitivity-based white-box test sample generation method of the invention first obtains the minimum distortion perturbation of the model; selects a coverage index to update the test sample set and generates a preliminary test sample queue; back-propagates the neural network's prediction result and computes the importance of the neurons layer by layer; and selects the neurons in the model that are sensitive to perturbation change through a neural network verification technique, generating the final test sample set of the model. The method specifically comprises the following steps:
Step 1: obtain the minimum distortion perturbation of the training sample set. The method is as follows:
In the search for the minimum distortion perturbation of the training sample set, a perturbation ε is added to the original training set, the original samples are fed into the attacked network, and, using the forward propagation of the neural network, the upper and lower bounds of the original sample are propagated forward to obtain the upper and lower bounds of every neuron.
The upper and lower bounds of the original sample are expressed as:
u^(0) = x_0 + ε,  l^(0) = x_0 − ε,
where u^(0) denotes the upper bound of the neurons, l^(0) the lower bound, ε the perturbation, and x_0 the original picture.
The upper and lower bounds of each neuron are expressed as:
u_j^(k) = max { e_j^T · h^(k)(z) : l^(k−1) ≤ z ≤ u^(k−1) },
l_j^(k) = min { e_j^T · h^(k)(z) : l^(k−1) ≤ z ≤ u^(k−1) },
where h^(k) is the corresponding hidden-layer operation of the neural network, k denotes the layer of the neural network, j denotes the j-th neuron, e_j is the j-th standard basis vector and e_j^T its transpose, z denotes the output of the neurons of the previous layer, and h^(k)(z) is the computation of the next hidden layer on the model output of the previous layer. The perturbation at which the upper and lower bounds of the classes no longer cross is the minimum distortion perturbation.
From the obtained minimum distortion perturbation, the upper and lower bounds of every neuron can be obtained; the upper bounds form the neuron boundary set U = { u_j^(k) } and the lower bounds form the neuron boundary set L = { l_j^(k) }, where k denotes the k-th layer and j the j-th neuron.
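The forward bound propagation of step 1 can be sketched with simple interval arithmetic. The sketch below is an illustration only: it assumes a plain fully connected ReLU network with given weight matrices, whereas the patent's verification procedure may compute tighter bounds.

```python
import numpy as np

def propagate_bounds(weights, biases, x, eps):
    """Propagate the input interval [x - eps, x + eps] through a
    fully connected ReLU network, returning per-layer neuron bounds."""
    lower, upper = x - eps, x + eps  # bounds of the original sample
    bounds = []
    for W, b in zip(weights, biases):
        # Interval arithmetic for the affine layer: split W into its
        # positive and negative parts so each bound remains sound.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        # ReLU is monotone, so it maps bounds to bounds directly.
        lower, upper = np.maximum(new_lower, 0.0), np.maximum(new_upper, 0.0)
        bounds.append((lower.copy(), upper.copy()))
    return bounds
```

A candidate ε then qualifies as the minimum distortion perturbation once the intervals of the output classes no longer overlap; ε can be grown or shrunk (for example by bisection) until that condition is tight.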
Step 2: selecting coverage rate indexes to update the test sample set, and generating a preliminary test sample queue; the method comprises the following steps:
step 2.1: selecting a coverage rate criterion as a judging basis for generating a new queue;
coverage criteria are specifically sensitivity importance neuron coverage criteria:
in the formula ,representing a set of neurons in a neural network, +.>Representing a function that calculates the number of neurons.
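Under one plausible reading of this criterion (an assumption, since the original formula is given only as an image), the coverage is the share of the network's neurons that are both sensitivity-important and activated by the test set:

```python
def sin_coverage(activated, sensitive, important, total_neurons):
    """Sensitivity-importance neuron coverage: fraction of the network's
    neurons (given as id sets) lying in SN and IN and activated by T."""
    covered = activated & sensitive & important
    return len(covered) / total_neurons
```

For example, with activated neurons {1, 2, 3}, sensitive neurons {2, 3, 4}, important neurons {3, 4, 5}, and 10 neurons in total, only neuron 3 is covered, giving a coverage of 0.1.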
Step 2.2: a seed selection strategy is selected for seed ordering, random seed selection may be selected, or the probability of first arriving seeds being selected is greater according to the time of seed addition to the queue, and the probability of being selected is smaller as the time of seed addition to the queue is longer.
Step 2.3: judging the coverage rate index, and putting the coverage rate index which is larger than the original index into a queue, and performing loop iteration to generate a preliminary test sample queue.
Step 3: the nerve network prediction result is back-propagated, the importance of the nerve cells is calculated layer by layer, and the importance calculation method of each nerve cell in the nerve network can calculate through the relevance score:
wherein f (x) represents the output of the model, the thLayer->Individual neurons, the correlation of which is defined by +.>A representation; />Equal to->Neurons of the layer->The sum of the correlations of all neurons related such that +.>Correlation of the nth layer->Is from the neural networkThe last layer of the collaterals->Is->Counter-propagating to the first layer->The importance neurons are determined by all neurons in the system, including the input image.
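The layer-wise relevance computation can be sketched as follows for a bias-free linear network. This uses the basic LRP z-rule as an illustrative stand-in (an assumption, since the patent's exact formula is only given as an image); it exhibits the conservation property described above.

```python
import numpy as np

def lrp_relevance(weights, activations, output_relevance, eps=1e-9):
    """Back-propagate relevance layer by layer (basic LRP z-rule).

    activations[k] is the input vector of layer k; relevance is
    redistributed in proportion to each contribution z_ij = a_i * w_ij,
    so the total relevance is (approximately) conserved per layer.
    """
    R = output_relevance
    for W, a in zip(reversed(weights), reversed(activations)):
        z = a[:, None] * W                    # contributions z_ij
        denom = z.sum(axis=0) + eps           # total input of each upper neuron
        R = (z * (R / denom)).sum(axis=1)     # redistribute, then sum per neuron
    return R  # relevance of the first layer (e.g. input pixels)
```

Neurons whose relevance is non-zero are exactly the candidates for the importance neuron set IN defined in step 4.1.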
Step 4: selecting neurons sensitive to disturbance change in the model through a neural network verification technology, and generating a final test sample set of the model; the method comprises the following steps:
step 4.1: the importance determined first according to step 3 is determined by correlationIndicating that the neuron contributes to the prediction result of the model when the value of the correlation is not 0, thereby defining the importance neuron set IN:
step 4.2: by adjusting minimum distortion disturbanceAccording to the conversion factor->Obtaining newly added distortion disturbance>Obtaining new neuron upper and lower bounds after obtaining newly added distortion disturbance>、/>Obtaining a boundary variation ratio with boundary variation/>Along->The boundary change of each neuron is different, and it is determined that when the boundary change ratio is greater than +.>When the neuron is sensitive to the disturbance, a sensitive neuron set SN is obtained:
wherein ,is->Layer (S)>Is->A neuron; /> and />Are respectively->Layer->An original upper bound and an original lower bound of the individual neurons; />,/>Respectively by->Modulated->Layer->A new upper bound and a new lower bound for the individual neurons.
Step 4.3: obtaining the importance and sensitivity neuron sets IN and IN according to step 4.1, step 4.2And selecting sensitive neurons and importance neurons in each layer, mapping each selected neuron onto a test case, iterating the loop, terminating when the coverage rate is increased to be greater than the original coverage rate, and finally generating a final test sample set.
The effectiveness and efficiency of the method of the invention were verified by the following experiments:
the evaluation index is neuron coverage and sensitivity importance neuron coverage.
Firstly, selecting a data set, selecting an MNIST data set, wherein the MNIST data set comprises 60000 training data and 10000 test data, and each picture is formed by 28 multiplied by 28 handwritten digital pictures of 0-9. Each picture is in the form of a black matrix, represented by 0, and a white matrix, represented by a floating point number between 0 and 1, the closer to 1, the whiter the color. The present invention then selects the MNIST model as the white box model. The comparison method is a coverage-guided deep hunter sample generation method.
TABLE 1: Neuron coverage under the white-box model of the invention
TABLE 2: Sensitivity-importance neuron coverage for different parameters under the white-box model of the invention
The results in Tables 1 and 2 show that the test samples generated by the method of the invention do not show a large drop under the neuron coverage criterion, and that the samples generated by the method have a higher sensitivity-importance neuron coverage than those generated by DeepHunter. In summary, the invention proposes a test sample generation method based on neuron sensitivity: when generating new samples, using sensitivity-importance neurons as the coverage indicator achieves greater coverage of the neurons in the neural network.
The invention combines correlation layer-by-layer propagation technology and neural network verification to balance sensitivity and importance neuron selection degree.
Compared with other transferable test sample generation methods, the invention achieves a higher sensitivity-importance coverage while preserving the neuron coverage effect.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations are also to be regarded as falling within the scope of the invention.
Claims (9)
1. A neuron-sensitivity-based white-box test sample generation method, characterized by comprising the following steps:
step 1: obtaining the minimum distortion perturbation of a training sample set;
step 2: selecting a coverage index to update the test sample set and generating a preliminary test sample queue;
step 3: back-propagating the neural network's prediction result and computing the importance of the neurons layer by layer;
step 4: selecting the neurons in the model that are sensitive to perturbation change through a neural network verification technique, and generating the final test sample set of the model.
2. The neuron-sensitivity-based white-box test sample generation method according to claim 1, characterized in that in step 1 the minimum distortion perturbation of the training sample set is obtained by the following method:
in the search for the minimum distortion perturbation of the training sample set, a perturbation ε is added to the original training set, the original samples are fed into the attacked network, and, using the forward propagation of the neural network, the upper and lower bounds of the original sample are propagated forward to obtain the upper and lower bounds of every neuron.
3. The neuron-sensitivity-based white-box test sample generation method according to claim 2, characterized in that the upper and lower bounds of the original sample are expressed as:
u^(0) = x_0 + ε,  l^(0) = x_0 − ε,
where u^(0) denotes the upper bound of the neurons, l^(0) the lower bound, ε the perturbation, and x_0 the original picture;
the upper and lower bounds of each neuron are expressed as:
u_j^(k) = max { e_j^T · h^(k)(z) : l^(k−1) ≤ z ≤ u^(k−1) },
l_j^(k) = min { e_j^T · h^(k)(z) : l^(k−1) ≤ z ≤ u^(k−1) },
where h^(k) is the corresponding hidden-layer operation of the neural network, k denotes the layer of the neural network, j denotes the j-th neuron, e_j is the j-th standard basis vector and e_j^T its transpose, z denotes the output of the neurons of the previous layer, and h^(k)(z) is the computation of the next hidden layer on the model output of the previous layer; the perturbation at which the upper and lower bounds of the classes no longer cross is the minimum distortion perturbation; from the obtained minimum distortion perturbation the upper and lower bounds of every neuron can be obtained, the upper bounds forming the neuron boundary set U = { u_j^(k) } and the lower bounds forming the neuron boundary set L = { l_j^(k) }, where k denotes the k-th layer and j the j-th neuron.
4. The method for generating white-box test samples based on neuron sensitivity according to claim 1, wherein in step 2, the coverage index is selected to update the test sample set to generate a preliminary test sample queue, and the method comprises the following steps:
step 2.1: selecting a coverage rate criterion as a judging basis for generating a new queue;
step 2.2: selecting a seed selection strategy for sorting seeds;
step 2.3: evaluating the coverage index, putting samples whose coverage exceeds the original value into the queue, and iterating the loop to generate the preliminary test sample queue.
5. The neuron-sensitivity-based white-box test sample generation method according to claim 1, characterized in that in step 3 the neural network's prediction result is back-propagated and the importance of the neurons is calculated layer by layer, the importance of each neuron in the neural network being calculated through its relevance score:
R_j^(k) = Σ_i R_{j←i}^(k,k+1),
where the relevance of the j-th neuron in the k-th layer is denoted R_j^(k); it equals the sum of the relevance shares of all neurons of layer k+1 to which that neuron is related, so that the relevance is conserved from layer to layer:
f(x) = Σ_j R_j^(L) = … = Σ_j R_j^(1),
where f(x) denotes the output of the model; the relevance is back-propagated from the last layer L of the neural network, whose relevance equals f(x), to all neurons of the first layer, including the input image, and the importance neurons are thereby determined.
6. The neuron-sensitivity-based white-box test sample generation method according to claim 1, characterized in that in step 4 the neurons in the model that are sensitive to perturbation change are selected through a neural network verification technique and the final test sample set of the model is generated by the following method:
step 4.1: defining the importance neuron set IN according to the importance of the neurons determined in step 3;
step 4.2: adjusting the minimum distortion perturbation ε according to the conversion factor γ, obtaining a boundary change ratio from the boundary change, and obtaining the sensitive neuron set SN;
step 4.3: with the importance neuron set IN obtained in step 4.1 and the sensitive neuron set SN obtained in step 4.2, selecting the sensitive neurons and importance neurons in each layer, mapping each selected neuron onto a test case, iterating the loop, terminating when the coverage is improved, and finally generating the final test sample set.
7. The neuron-sensitivity-based white-box test sample generation method according to claim 2, characterized in that in step 4.1 the importance is indicated by the relevance R: a neuron contributes to the prediction result of the model when its relevance is not 0, whereby the importance neuron set is defined as IN = { n_j^(k) | R_j^(k) ≠ 0 }.
8. The neuron-sensitivity-based white-box test sample generation method according to claim 6, characterized in that in step 4.2 the minimum distortion perturbation ε is adjusted according to the conversion factor γ to obtain the newly added distortion perturbation ε′ = γ·ε; after the newly added distortion perturbation is obtained, new neuron upper and lower bounds u′_j^(k) and l′_j^(k) are obtained, and a boundary change ratio r_j^(k) is obtained from the boundary change; since the boundary change of each neuron is different, a neuron is determined to be sensitive to the perturbation when its boundary change ratio is greater than β, and the sensitive neuron set SN is obtained.
9. The method of claim 8, wherein the set of sensitive neurons SN is:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310420554.6A CN116150038B (en) | 2023-04-19 | 2023-04-19 | Neuron sensitivity-based white-box test sample generation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116150038A true CN116150038A (en) | 2023-05-23 |
CN116150038B CN116150038B (en) | 2023-06-30 |
Family
ID=86362161
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113986717A (en) * | 2021-09-29 | 2022-01-28 | 南京航空航天大学 | Fuzzy testing method and terminal adopting region-based neuron selection strategy |
CN115757103A (en) * | 2022-11-03 | 2023-03-07 | 北京航空航天大学 | Neural network test case generation method based on tree structure |
Non-Patent Citations (1)
Title |
---|
SUN Hao et al., "A survey of adversarial robustness techniques for deep convolutional neural network image recognition models", Journal of Radars, vol. 10, no. 4, pages 571 - 587 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||