CN110135558B - Deep neural network test sufficiency method based on variable strength combination test - Google Patents

Deep neural network test sufficiency method based on variable strength combination test Download PDF

Info

Publication number
CN110135558B
Authority
CN
China
Prior art keywords
neuron
neural network
combination
test
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910323798.6A
Other languages
Chinese (zh)
Other versions
CN110135558A (en)
Inventor
王子元
陈炎杉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN201910323798.6A priority Critical patent/CN110135558B/en
Publication of CN110135558A publication Critical patent/CN110135558A/en
Application granted granted Critical
Publication of CN110135558B publication Critical patent/CN110135558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 - Physical realisation using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a deep neural network test sufficiency method based on variable-strength combination testing. The method uses variable-strength combination testing to extract the relationships of neurons in a deep neural network according to the model weights, extracts neuron combinations of different strengths, and evaluates the coverage of neuron activation states in the neural network according to the activation states of the neurons in those combinations; model test sufficiency is then evaluated according to the calculated coverage. The invention not only effectively reduces the neuron state space, but also extracts the corresponding neuron combinations according to the different interaction relationships and calculates the coverage. If the test cases reach a high coverage, their sufficiency is better demonstrated, and the scientific soundness and reliability of the test criterion are improved.

Description

Deep neural network test sufficiency method based on variable strength combination test
Technical Field
The invention provides a deep neural network test sufficiency criterion based on variable-strength combination testing, which is used to evaluate the test sufficiency of a deep neural network. The invention relates to the technical field of deep learning testing.
Background
Deep learning is a machine learning method that performs representation learning on data, carrying out feature extraction and data transformation through a cascade of multiple layers of nonlinear processing units. Since deep learning was formally proposed in 2006, it has enabled revolutionary breakthroughs in artificial intelligence. In recent years, deep learning has developed rapidly and has been applied to many safety-critical fields such as autonomous driving and smart medicine. However, safety accidents occur frequently due to the lack of effective testing techniques.
Systems built around deep learning are characterized by high-dimensional inputs, many hidden layers and low-dimensional outputs, which makes them very different from traditional software systems, so traditional software testing techniques cannot be applied to deep learning directly. Deep-neural-network-based models are also used in diverse applications, characterized by complex scenes and large data volumes, which raises many challenges when testing such systems, for example: few suitable and efficient test methods, insufficient test data, and the like.
At present, deep learning systems can hardly reach 100% accuracy through conventional training, and testing oriented to deep neural networks is still at an early stage. Because deep neural networks differ from traditional software, the coverage criteria of traditional software testing do not apply to deep neural network models, so a coverage criterion for evaluating test sufficiency is lacking for deep neural networks.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects of the prior art, the invention provides a deep neural network test sufficiency method based on variable-strength combination testing, which ensures the sufficiency of test cases more comprehensively and improves the reliability of deep learning testing. A deep neural network is characterized by great depth and a large number of neurons. When testing a deep neural network, the state space formed by the large number of neurons is too large to test sufficiently. Moreover, because the weights differ, the interaction relationships between front-layer and rear-layer neurons also differ. How to effectively reduce the neuron state space and how to determine the variable-strength relationships are the key issues in designing a deep neural network test sufficiency criterion based on variable-strength combination testing.
The technical scheme is as follows: in order to achieve the above purpose, the invention adopts the following technical scheme:
a deep neural network test sufficiency method based on variable strength combination test comprises the following steps:
Step 1), extracting the deep neural network model weights: in the deep neural network, after a test case is input, the neurons propagate real values to the subsequent layers; the model weights between the front-layer neurons and the rear-layer neurons are extracted.
Step 2), performing relation extraction on the neurons according to the model weights obtained in step 1), and extracting neuron combinations of different strengths, comprising the following steps:
Step 2.1), the maximum single neuron influence combination relation extraction method: in the deep neural network, for each neuron, the combination of front-layer neurons connected to it with a weight larger than a threshold a is selected.
Step 2.2), the two-dimensional reinforced combination relation extraction method: in the deep neural network, all binary neuron combinations are extracted first, and higher-strength combinations are applied to specific neurons, namely rear-layer neurons for which the sum of the weights of all connected neurons exceeds a threshold b.
Step 2.3), the maximum total influence combination relation extraction method: in the deep neural network, all binary, ternary and quaternary neuron combinations are extracted; all combinations are scored, and those whose score exceeds a threshold c are selected.
Step 3), extracting the neuron states for the test case: after the test case is input to the deep neural network, the neurons propagate it through the subsequent layers as real values, each neuron corresponding to one real value. A threshold of the activation function is set, and each real value is converted into a Boolean value indicating whether the neuron is activated. The output Boolean value of each neuron is obtained as the neuron output.
Step 4), calculating the variable-strength combination coverage according to the neuron combinations extracted in step 2) and the neuron outputs obtained in step 3), and evaluating the model test sufficiency according to the calculated coverage.
Preferably, the variable-strength combination coverage in step 4) is calculated as follows:
Step 4.1), counting the total number of neuron combination states according to the neuron combinations extracted in step 2).
Step 4.2), counting the covered neuron combination states according to the neuron states extracted in step 3).
Step 4.3), calculating the variable-strength combination coverage:
COV = N_cov / N
wherein COV represents the variable-strength combination coverage, N represents the total number of neuron combination states, and N_cov represents the number of covered neuron combination states.
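As a purely illustrative example with made-up numbers (not taken from the patent), and assuming each Boolean activation assignment of a combination counts as one neuron combination state: if step 2) yields three binary neuron combinations, each has 2^2 = 4 possible states, so N = 12; if the test cases exercise 9 of these states, then N_cov = 9 and COV = 9/12 = 0.75.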
Preferably: the scoring formula in step 2.3) is as follows:
the fraction is the sum of the weights of all neurons in the combination and their connection to obtain the posterior layer neurons/the combination strength.
Compared with the prior art, the invention has the following beneficial effects:
the variable strength combination test is applied to the deep learning test, namely, the problem of huge state space of neurons is solved, action relations among the neurons are comprehensively considered, three combination relation extraction methods are provided according to different action relations, a coverage criterion based on the variable strength combination test is provided, the sufficiency of a test case can be comprehensively ensured, and the reliability of the deep learning test is improved.
Drawings
FIG. 1: flow chart of the variable-strength combination test.
FIG. 2: flow chart of the maximum single neuron influence combination relation extraction.
FIG. 3: flow chart of the two-dimensional reinforced combination relation extraction.
FIG. 4: flow chart of the maximum total influence combination relation extraction.
Detailed Description
The present invention is further illustrated below with reference to the accompanying drawings and specific embodiments. It should be understood that these examples are given solely for the purpose of illustration and are not intended to limit the scope of the invention; various equivalent modifications that occur to those skilled in the art upon reading the present invention also fall within the scope defined by the appended claims.
A deep neural network test sufficiency method based on variable-strength combination testing uses the variable-strength combination testing technique to extract the relationships of neurons in the deep neural network according to the model weights, extracts neuron combinations of different strengths, and evaluates the coverage of neuron activation states in the neural network according to the activation states of the neurons in those combinations. As shown in FIG. 1, the method comprises the following steps:
Step 1), extracting the deep neural network model weights: in the deep neural network, after a test case is input, the neurons propagate real values to the subsequent layers; a weight exists between each front-layer neuron and rear-layer neuron and acts as the parameter that determines the interaction between them. These parameters are extracted to obtain the model weights between the front-layer and rear-layer neurons.
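The following is a minimal, non-authoritative sketch of this step. The patent does not name a framework, so the use of tf.keras, the toy three-layer Dense model, and the 4-feature dummy input are all illustrative assumptions; only the idea of reading out the inter-layer weight matrices comes from the text.

```python
import numpy as np
import tensorflow as tf

# Toy fully connected model standing in for the deep neural network under test
# (architecture and sizes are illustrative assumptions).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(6, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model(np.zeros((1, 4), dtype="float32"))  # build the model on a dummy 4-feature input

# weights[l][i, j] is the model weight between neuron i of front layer l
# and neuron j of rear layer l + 1.
weights = [layer.get_weights()[0] for layer in model.layers]
for l, w in enumerate(weights):
    print(f"layer {l} -> layer {l + 1}: weight matrix shape {w.shape}")
```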
Step 2), using the variable-strength combination testing technique, extracting the relationships of the neurons according to the model weights obtained in step 1) and extracting neuron combinations of different strengths. Three different relation extraction methods are provided, which analyze the model from different perspectives, as follows:
step 2.1), a maximum single neuron influence combination relation extraction method: as shown in fig. 2, in the deep neural network, a combination of neurons in the former layer connected thereto and having a weight larger than a threshold value a is selected from each neuron.
Step 2.2), the two-dimensional reinforced combination relation extraction method: as shown in FIG. 3, in the deep neural network, all binary neuron combinations are extracted first, and higher-strength combinations are applied to specific neurons, i.e., rear-layer neurons for which the sum of the weights of all connected neurons exceeds a threshold b.
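A minimal sketch of step 2.2 under stated assumptions: W[i, j] is again the weight from front-layer neuron i to rear-layer neuron j, the layer is fully connected, and the "higher-strength" combinations applied to a specific rear-layer neuron are taken to be ternary ones; the patent text does not fix that strength, so 3 is an illustrative choice, as are the function name and threshold value.

```python
from itertools import combinations
import numpy as np

def two_dim_reinforced_combinations(W, b):
    n_front = W.shape[0]
    combos = set(combinations(range(n_front), 2))   # all binary front-layer combinations
    specific = np.flatnonzero(W.sum(axis=0) > b)    # rear neurons whose incoming weight sum exceeds b
    if specific.size > 0:
        # Fully connected layer assumed, so every front-layer neuron feeds each
        # specific rear neuron; reinforce to ternary combinations for them.
        combos.update(combinations(range(n_front), 3))
    return combos

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))
print(sorted(two_dim_reinforced_combinations(W, b=0.5)))
```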
Step 2.3), the maximum total influence combination relation extraction method: as shown in FIG. 4, in the deep neural network, all binary, ternary and quaternary neuron combinations are extracted; all combinations are scored, and those whose score exceeds a threshold c are selected.
The scoring formula is as follows:
the fraction is the sum of the weights of all neurons in the combination and their connection to obtain the posterior layer neurons/the combination strength.
Step 3), extracting the neuron states for the test case: after the test case is input to the deep neural network, the neurons propagate it through the subsequent layers as real values, each neuron corresponding to one real value. A threshold of the activation function is set, and each real value is converted into a Boolean value indicating whether the neuron is activated. The output Boolean value of each neuron is obtained as the neuron output.
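A minimal sketch of step 3, using a toy fully connected ReLU network expressed as NumPy weight and bias lists; the activation threshold of 0.0 and the network sizes are illustrative assumptions, and only the conversion of each real-valued neuron output into a Boolean activation state comes from the text.

```python
import numpy as np

def neuron_states(x, weights, biases, threshold=0.0):
    """Propagate one test case forward and return, per layer, a Boolean vector
    marking which neurons are treated as activated."""
    states = []
    a = x
    for W, b in zip(weights, biases):
        z = a @ W + b
        a = np.maximum(z, 0.0)           # ReLU keeps the real-valued neuron outputs
        states.append(a > threshold)     # real value -> Boolean activation state
    return states

rng = np.random.default_rng(3)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
biases = [np.zeros(8), np.zeros(3)]
x = rng.normal(size=4)                   # one test case with 4 input features
print(neuron_states(x, weights, biases))
```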
Step 4), calculating the variable-strength combination coverage according to the neuron combinations extracted in step 2) and the neuron outputs obtained in step 3), and evaluating the model test sufficiency according to the calculated coverage.
The variable-strength combination coverage is calculated as follows:
Step 4.1), counting the total number of neuron combination states according to the neuron combinations extracted in step 2).
Step 4.2), counting the covered neuron combination states according to the neuron states extracted in step 3).
Step 4.3), calculating the variable-strength combination coverage:
COV = N_cov / N
wherein COV represents the variable-strength combination coverage, N represents the total number of neuron combination states, and N_cov represents the number of covered neuron combination states.
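A minimal sketch of the coverage computation, under these assumptions: each extracted combination is a tuple of neuron indices within one layer, each test case contributes one Boolean activation vector for that layer, and every distinct Boolean assignment of a combination counts as one neuron combination state. These representation choices are not fixed by the patent text; only the ratio COV = N_cov / N is.

```python
import numpy as np

def variable_strength_coverage(combos, activation_vectors):
    """COV = N_cov / N for a list of neuron combinations and the Boolean
    activation vectors produced by the test cases."""
    total = sum(2 ** len(combo) for combo in combos)          # N: all possible activation states
    covered = set()
    for k, combo in enumerate(combos):
        for states in activation_vectors:                     # one Boolean vector per test case
            covered.add((k, tuple(bool(states[i]) for i in combo)))
    return len(covered) / total                               # N_cov / N

combos = [(0, 1), (1, 2, 3)]                                  # toy combinations from step 2)
activation_vectors = [np.array([True, False, True, True]),    # toy neuron outputs from step 3)
                      np.array([True, True, False, True])]
print(f"COV = {variable_strength_coverage(combos, activation_vectors):.3f}")
```

On this toy data N = 4 + 8 = 12 states, 4 of them are covered, so COV is roughly 0.333.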
In a deep neural network, the interaction relationships between neurons differ from one another. The deep neural network test sufficiency method based on variable-strength combination testing effectively reduces the neuron state space, extracts the corresponding neuron combinations according to the different interaction relationships, and calculates the coverage. If the test cases reach a high coverage, their sufficiency is better demonstrated, and the scientific soundness and reliability of the test criterion are improved.
The above description covers only the preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations are also intended to fall within the scope of the invention.

Claims (2)

1. A deep neural network test sufficiency method based on variable strength combination test is characterized by comprising the following steps:
step 1), extracting the deep neural network model weights: in the deep neural network, after a test case is input, the neurons propagate real values to the subsequent layers, and the model weights between the front-layer neurons and the rear-layer neurons are extracted;
step 2), performing relation extraction on the neurons according to the model weights obtained in step 1), and extracting neuron combinations of different strengths, comprising the following steps:
step 2.1), a maximum single neuron influence combination relation extraction method: in the deep neural network, for each neuron, selecting the combination of front-layer neurons connected to it with a weight larger than a threshold a;
step 2.2), a two-dimensional reinforced combination relation extraction method: in the deep neural network, first extracting all binary neuron combinations, and applying higher-strength combinations to specific neurons, namely rear-layer neurons for which the sum of the weights of all connected neurons is greater than a threshold b;
step 2.3), a maximum total influence combination relation extraction method: in the deep neural network, extracting all binary, ternary and quaternary neuron combinations, scoring all the combinations, and selecting the combinations whose score is greater than a threshold c;
step 3), extracting the neuron states for the test case: after the test case is input to the deep neural network, the neurons propagate it through the subsequent layers as real values, each neuron corresponding to one real value; setting a threshold of the activation function, and converting each real value into a Boolean value indicating whether the neuron is activated; obtaining the output Boolean value of each neuron as the neuron output;
step 4), calculating the variable-strength combination coverage according to the neuron combinations extracted in step 2) and the neuron outputs obtained in step 3); evaluating the model test sufficiency according to the calculated coverage;
the variable-strength combination coverage in step 4) being calculated as follows:
step 4.1), counting the total number of neuron combination states according to the neuron combinations extracted in step 2);
step 4.2), counting the covered neuron combination states according to the neuron states extracted in step 3);
step 4.3), calculating the variable-strength combination coverage:
COV = N_cov / N
wherein COV represents the variable-strength combination coverage, N represents the total number of neuron combination states, and N_cov represents the number of covered neuron combination states.
2. The deep neural network test sufficiency method based on the variable strength combined test according to claim 1, wherein: the scoring formula in step 2.3) is as follows:
score = (sum of the weights of the connections from all neurons in the combination to their rear-layer neuron) / (strength of the combination).
CN201910323798.6A 2019-04-22 2019-04-22 Deep neural network test sufficiency method based on variable strength combination test Active CN110135558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910323798.6A CN110135558B (en) 2019-04-22 2019-04-22 Deep neural network test sufficiency method based on variable strength combination test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910323798.6A CN110135558B (en) 2019-04-22 2019-04-22 Deep neural network test sufficiency method based on variable strength combination test

Publications (2)

Publication Number Publication Date
CN110135558A CN110135558A (en) 2019-08-16
CN110135558B true CN110135558B (en) 2022-04-12

Family

ID=67570584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910323798.6A Active CN110135558B (en) 2019-04-22 2019-04-22 Deep neural network test sufficiency method based on variable strength combination test

Country Status (1)

Country Link
CN (1) CN110135558B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111061626B (en) * 2019-11-18 2023-11-14 北京工业大学 Test case priority ordering method based on neuron activation frequency analysis
CN111079930B (en) * 2019-12-23 2023-12-19 深圳市商汤科技有限公司 Data set quality parameter determining method and device and electronic equipment
CN112035338B (en) * 2020-07-10 2022-01-28 河海大学 Coverage rate calculation method of stateful deep neural network
CN111858341A (en) * 2020-07-23 2020-10-30 深圳慕智科技有限公司 Test data measurement method based on neuron coverage
CN114565051B (en) * 2022-03-03 2024-05-24 余姚市亿盛金属制品有限公司 Method for testing product classification model based on influence degree of neurons
CN116185843B (en) * 2023-01-16 2023-12-08 天航长鹰(江苏)科技有限公司 Two-stage neural network testing method and device based on neuron coverage rate guidance

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5130936A (en) * 1990-09-14 1992-07-14 Arinc Research Corporation Method and apparatus for diagnostic testing including a neural network for determining testing sufficiency
KR101614772B1 (en) * 2014-12-19 2016-04-22 한전원자력연료 주식회사 Method of synthesizing axial power distribution of reactor core using neural network and the In-Core Protection System(ICOPS) using the same
CN106445821A (en) * 2016-09-23 2017-02-22 郑州云海信息技术有限公司 Method for automatically generating test case based on genetic algorithm
CN108376116A (en) * 2018-01-31 2018-08-07 浙江理工大学 Based on the method for generating test case for improving particle cluster algorithm
CN108415841A (en) * 2018-03-19 2018-08-17 南京邮电大学 A kind of combined test use-case prioritization method based on covering dynamics increment
CN108876034A (en) * 2018-06-13 2018-11-23 重庆邮电大学 A kind of improved Lasso+RBF neural network ensemble prediction model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7502763B2 (en) * 2005-07-29 2009-03-10 The Florida International University Board Of Trustees Artificial neural network design and evaluation tool

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5130936A (en) * 1990-09-14 1992-07-14 Arinc Research Corporation Method and apparatus for diagnostic testing including a neural network for determining testing sufficiency
KR101614772B1 (en) * 2014-12-19 2016-04-22 한전원자력연료 주식회사 Method of synthesizing axial power distribution of reactor core using neural network and the In-Core Protection System(ICOPS) using the same
CN106445821A (en) * 2016-09-23 2017-02-22 郑州云海信息技术有限公司 Method for automatically generating test case based on genetic algorithm
CN108376116A (en) * 2018-01-31 2018-08-07 浙江理工大学 Based on the method for generating test case for improving particle cluster algorithm
CN108415841A (en) * 2018-03-19 2018-08-17 南京邮电大学 A kind of combined test use-case prioritization method based on covering dynamics increment
CN108876034A (en) * 2018-06-13 2018-11-23 重庆邮电大学 A kind of improved Lasso+RBF neural network ensemble prediction model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Feature-Guided Black-Box Safety Testing of Deep Neural Networks; Wicker M et al; Proceedings of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems; 20181231; full text *
Rapid evaluation of poultry manure content using artificial neural networks method; Longjian Chen et al; Biosystems Engineering; 20081231; full text *
Variable strength combinatorial test case generation method based on the one-test-at-a-time strategy; Wang Ziyuan et al; Chinese Journal of Computers; 20121231; Vol. 35, No. 12; full text *

Also Published As

Publication number Publication date
CN110135558A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110135558B (en) Deep neural network test sufficiency method based on variable strength combination test
CN105046277B (en) Robust mechanism study method of the feature significance in image quality evaluation
CN106096535B (en) Face verification method based on bilinear joint CNN
CN112149316A (en) Aero-engine residual life prediction method based on improved CNN model
CN109145939A (en) A kind of binary channels convolutional neural networks semantic segmentation method of Small object sensitivity
CN108021933A (en) Neural network recognization model and recognition methods
CN111507884A (en) Self-adaptive image steganalysis method and system based on deep convolutional neural network
CN102201236A (en) Speaker recognition method combining Gaussian mixture model and quantum neural network
CN107077734A (en) Determining method and program
CN111160452A (en) Multi-modal network rumor detection method based on pre-training language model
CN103136540B (en) A kind of Activity recognition method based on implicit structure reasoning
CN106023154A (en) Multi-temporal SAR image change detection method based on dual-channel convolutional neural network (CNN)
CN101976313A (en) Frequent subgraph mining based abnormal intrusion detection method
CN103440471B (en) The Human bodys' response method represented based on low-rank
CN110458003B (en) Facial expression action unit countermeasure synthesis method based on local attention model
CN104657466A (en) Method and device for identifying user interest based on forum post features
CN109255339B (en) Classification method based on self-adaptive deep forest human gait energy map
CN110263164A (en) A kind of Sentiment orientation analysis method based on Model Fusion
CN106997379A (en) A kind of merging method of the close text based on picture text click volume
CN103778913A (en) Pathologic voice recognizing method
CN110781760B (en) Facial expression recognition method and device based on space attention
CN110458215A (en) Pedestrian's attribute recognition approach based on multi-time Scales attention model
CN111737688B (en) Attack defense system based on user portrait
CN110347579B (en) Deep learning test case selection method based on neuron output behavior pattern
CN102955948B (en) A kind of distributed mode recognition methods based on multiple agent

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant