CN116303088A - Test case ordering method based on deep neural network cross entropy loss - Google Patents
- Publication number
- CN116303088A CN116303088A CN202310401942.XA CN202310401942A CN116303088A CN 116303088 A CN116303088 A CN 116303088A CN 202310401942 A CN202310401942 A CN 202310401942A CN 116303088 A CN116303088 A CN 116303088A
- Authority
- CN
- China
- Prior art keywords
- cross entropy
- entropy loss
- test case
- neural network
- deep neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention relates to a test case ordering method based on deep neural network (DNN) cross entropy loss. The method constructs a test case cross entropy loss quantization model and uses a training set to train an N-class DNN classifier. Test cases are input into the trained N-class DNN model, and a feature set is extracted from the prediction results; the feature set is then input into the cross entropy loss quantization model to obtain the corresponding cross entropy loss values, which are sorted to produce an ordered test case set. Compared with the prior art, the invention has a significant advantage: by analyzing the training process of the deep neural network, a characteristic of defect-revealing test cases is identified, namely that the larger the cross entropy loss value of a test case relative to the DNN, the more likely it is to reveal a defect of the DNN; retraining on such cases improves the robustness of the DNN model.
Description
Technical Field
The invention relates to the technical field of software engineering, and in particular to a test case ordering method based on deep neural network cross entropy loss.
Background
Deep neural networks (Deep Neural Networks, DNN for short) have made breakthrough progress in fields such as image recognition, speech recognition, and natural language processing, and have been widely deployed in software systems to help solve various tasks, for example autonomous driving systems and medical diagnostic systems. However, neural networks are susceptible to interference and can therefore make false decisions that lead to loss of life or property, such as the fatal collision of a Google self-driving car. It is therefore extremely important to ensure the reliability and robustness of DNN-driven software systems.
DNN testing is one of the most effective methods to ensure DNN quality. However, unlike traditional software, a DNN follows a data-driven programming paradigm: its internal logic is formed through multiple rounds of training on massive data, which gives DNN models characteristics such as poor interpretability and low generalization ability. Many conventional software testing methods therefore cannot be applied directly to DNN testing. In practice, to test a DNN model adequately, a tester typically needs a large amount of labeled data to test and optimize the model. However, labeling test cases to verify the correctness of the model output is very expensive, for several reasons: first, the scale of test inputs that must be labeled is large; second, labeling is done mainly by hand and is therefore inefficient; finally, labeling data in specific fields (e.g., the medical or financial field) requires people with expertise in that field.
To alleviate the labeling cost problem, one possible solution is to sort unlabeled test cases and assign higher priority to the test cases most likely to cause a DNN misprediction. After sorting, only the higher-priority test cases need to be labeled, which both saves labeling cost and improves the efficiency of DNN testing.
Document 1: Chinese patent application No. 202111140405.1 discloses a robust image classification model training method based on a small number of neuron connections, which uses a backbone network to extract features of an input image, taking potential features from the last convolution layer and the global average pooling layer. The method obtains a robust model without increasing the trainable parameters of the model and without adversarial training, but it cannot solve the problem of low test precision caused by the large scale of the parameter quantity.
Disclosure of Invention
The invention aims to provide a test case sorting method based on deep neural network cross entropy loss. A test case cross entropy loss quantization model is constructed to precisely quantize the cross entropy loss value of a test case relative to the DNN; the larger the cross entropy loss value, the higher the labeling priority of the test case. Using this value as a sorting index, the test cases with higher labeling priority are selected for model retraining.
The technical solution for realizing the purpose of the invention is as follows:
A test case ordering method based on deep neural network cross entropy loss comprises the following steps:

Step 1: construct a test case cross entropy loss quantization model.

Step 2: obtain unlabeled test cases and input them into a trained N-class DNN model to obtain the model's prediction results. The prediction results comprise N class sets, where the test cases in each set share the same predicted class. Features are extracted from the activation trajectories of the test cases on the neurons in the last hidden layer of the DNN model to obtain a feature set.

Step 3: input the feature set into the test case cross entropy loss quantization model to obtain the cross entropy loss value, relative to the DNN model, of the test cases in each class set.

Step 4: sort the test cases by their cross entropy loss values to obtain a sorted test case set.
Further, the test case cross entropy loss quantization model is built with an optimized distributed gradient boosting library, using the cross entropy loss of each test case relative to the DNN as its label, and is trained on the feature set extracted from the training set.
Further, the test case cross entropy loss quantization model is constructed by the following steps:
S11, feature extraction:

Let the training set of the deep neural network be X = {x_1, x_2, x_3, ..., x_M}, where M is the size of the training set.

Let L = {e_1, e_2, e_3, ..., e_m} be the last hidden layer of the deep neural network, where e_i (i ∈ [1, m]) denotes a neuron in L and m is the number of neurons in L.

Let α_e(x) denote the output value of sample x on neuron e, and let α_L(x) denote the set of output values of sample x on L, i.e., the activation trajectory of sample x on L; α_L(X) = {α_L(x) | x ∈ X} denotes the activation trajectories of all training samples on L.

For each neuron e_i in L, count the range [low_i, high_i] of its activation values on the training set X and divide this range into k intervals {u_1, u_2, u_3, ..., u_k}.

For any sample x, if its activation value α_{e_i}(x) falls into interval u_j, the feature value extracted for sample x at neuron e_i is f_x(e_i) = j. For each sample x an m-dimensional feature vector F(x) = {f_x(e_1), f_x(e_2), f_x(e_3), ..., f_x(e_m)} is extracted, and the features extracted from the training set are F(X) = {F(x) | x ∈ X}.
S12, label extraction:

After the training set X is input into the deep neural network and the network's output is processed by the activation function, a set of predicted probabilities P(X) = {P'(x) | x ∈ X} is obtained, where P'(x) = {p'_1(x), p'_2(x), p'_3(x), ..., p'_N(x)} and N is the number of classes.

According to the cross entropy loss formula, the cross entropy loss value of each sample relative to the deep neural network, Loss(X) = {Loss(x) | x ∈ X}, is computed as the label:

Loss(x) = CEloss(x) = - Σ_{n=1}^{N} p_n(x) · log p'_n(x)

where CEloss(x) is the cross entropy loss value of each sample, Loss(x) is the loss value of each sample, N is the number of classes, p_n(x) is the true probability that sample x belongs to the n-th class, and p'_n(x) is the probability with which the DNN predicts sample x as the n-th class.
S13, train the test case cross entropy loss quantization model using the extracted features F(X) = {F(x) | x ∈ X} and the extracted labels Loss(X) = {Loss(x) | x ∈ X}.
Further, extracting features from the activation trajectories of the test cases on the neurons in the last hidden layer of the deep neural network comprises:

dividing the prediction results of the DNN model into N sets C = {C_1, C_2, C_3, ..., C_N}, where the test cases in each set C_i (i ∈ [1, N]) are predicted by the DNN to belong to the same class;

obtaining the feature set F_C = {F_{C_1}, F_{C_2}, ..., F_{C_N}}, where the features extracted for each set C_i are F_{C_i} = {F_i^1, F_i^2, ..., F_i^n}, F_i^j denotes the features of the j-th test case in set C_i, and n is the number of test cases in C_i.

Further, the feature set F_C is input into the test case cross entropy loss quantization model to obtain the cross entropy loss values, relative to the DNN model, of the test cases in each class set: L_C = {L_{C_1}, L_{C_2}, ..., L_{C_N}}, where L_{C_i} denotes the set of cross entropy losses of all test cases in C_i.

Further, the cross entropy loss values in each set L_{C_i} are sorted in descending order to obtain the sorted test case set S_C = {S_{C_1}, S_{C_2}, ..., S_{C_N}}, where S_{C_i} denotes the result of sorting all test cases in C_i.
An apparatus for the test case ordering method based on deep neural network cross entropy loss comprises a memory for storing a computer program and a processor that implements the steps of the method when executing the computer program.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the test case ordering method based on deep neural network cross entropy loss.
Compared with the prior art, the invention has a significant advantage: by analyzing the training process of the deep neural network, a characteristic of defect-revealing test cases is identified, namely that the larger the cross entropy loss value of a test case relative to the DNN, the more likely it is to reveal a defect of the DNN, and retraining on such cases helps improve DNN robustness.
This is reflected in the following three aspects:

1. A test case cross entropy loss quantization model is constructed, precisely quantizing the cross entropy loss of a test case relative to the deep neural network.

2. The cross entropy loss of the test cases relative to the DNN model is used as the sorting index, and higher priority is given to test cases with high cross entropy loss values, which improves the defect detection rate of the DNN model; at the same time, only the high-priority test cases are labeled, which saves labeling cost and improves DNN testing efficiency.

3. Test cases with strong defect-revealing capability are placed in the front positions; after these test cases are labeled and added to the original training set, the DNN model is retrained, which significantly improves its robustness.
Drawings
FIG. 1 is a schematic diagram of a test case ordering method based on deep neural network cross entropy loss according to the present invention.
FIG. 2 is a schematic diagram of a comparison of test case cross entropy loss quantized data with actual data in an embodiment of the invention, where (a) is VGG-16 and (b) is ResNet-20.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in FIGS. 1-2, a test case ordering method based on deep neural network cross entropy loss comprises the following steps:

Step 1: construct a test case cross entropy loss quantization model.

Step 2: process the unlabeled test cases. Input them into the trained N-class DNN model and divide them, according to the DNN prediction results, into N sets C = {C_1, C_2, C_3, ..., C_N}, where the test cases in each set C_i (i ∈ [1, N]) are predicted by the DNN model to belong to the same class. Features are then extracted from the activation trajectories of the test cases on the neurons in the last hidden layer of the DNN to obtain the feature set F_C = {F_{C_1}, F_{C_2}, ..., F_{C_N}}, where the features extracted for each set C_i are F_{C_i} = {F_i^1, F_i^2, ..., F_i^n}, F_i^j denotes the features of the j-th test case in set C_i, and n is the number of test cases in C_i.

Step 3: quantize the cross entropy loss of the unlabeled test cases. Input the feature set F_C extracted in step 2 into the test case cross entropy loss quantization model constructed in step 1 to obtain the cross entropy loss values, relative to the DNN model, of the test cases in each class set: L_C = {L_{C_1}, L_{C_2}, ..., L_{C_N}}, where L_{C_i} denotes the set of cross entropy losses of all test cases in C_i.

Step 4: sort the test cases. For each C_i (i ∈ [1, N]), sort the cross entropy loss values in the corresponding set L_{C_i} obtained in step 3 in descending order to obtain the sorted test case set S_C = {S_{C_1}, S_{C_2}, ..., S_{C_N}}, where S_{C_i} denotes the result of sorting all test cases in C_i.
Specifically, in step 1, constructing the test case cross entropy loss quantization model comprises:

S11, feature extraction. Let the training set of the deep neural network be X = {x_1, x_2, x_3, ..., x_M}, where M is the size of the training set. Let L = {e_1, e_2, e_3, ..., e_m} be the last hidden layer of the deep neural network, where e_i (i ∈ [1, m]) denotes a neuron in L and m is the number of neurons in L. Let α_e(x) denote the output value of sample x on neuron e, and let α_L(x) denote the set of output values of sample x on L, called the activation trajectory (AT) of sample x on L, so that α_L(X) = {α_L(x) | x ∈ X} denotes the activation trajectories of all training samples on L. For each neuron e_i in L, we count the range [low_i, high_i] of its activation values on the training set X and then divide it equally into k intervals {u_1, u_2, u_3, ..., u_k}.

For any sample x, if its activation value α_{e_i}(x) falls into interval u_j, the feature value extracted for sample x at neuron e_i is f_x(e_i) = j. For each sample x an m-dimensional feature vector F(x) = {f_x(e_1), f_x(e_2), f_x(e_3), ..., f_x(e_m)} is extracted, so the features extracted from the training set are F(X) = {F(x) | x ∈ X}.
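The feature extraction in S11 can be sketched as follows. This is an illustrative reading of the description, not the patent's code: it assumes the activation values are available as arrays, and it encodes each sample by the index of the interval its activation falls into (the `extract_features` name and interval-index encoding are assumptions).

```python
import numpy as np

def extract_features(acts_train, acts_x, k=10):
    """Bin each neuron's activation range [low_i, high_i] on the training
    set into k equal intervals, and encode a sample by the interval index
    its activation falls into.
    acts_train: (M, m) activations of the training set on layer L.
    acts_x:     (m,)   activations of one sample x on layer L.
    Returns an m-dimensional integer feature vector F(x)."""
    low = acts_train.min(axis=0)            # low_i per neuron
    high = acts_train.max(axis=0)           # high_i per neuron
    # interval width per neuron; guard against constant neurons
    width = np.where(high > low, (high - low) / k, 1.0)
    idx = np.floor((acts_x - low) / width).astype(int)
    return np.clip(idx, 0, k - 1)           # interval index in [0, k-1]
```

Applying this to every sample yields the feature matrix F(X) used to train the quantization model.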
S12, label extraction. After the training set X is input into the DNN, the DNN's output is processed by a softmax activation function to obtain the set of predicted probabilities P(X) = {P'(x) | x ∈ X}, where P'(x) = {p'_1(x), p'_2(x), p'_3(x), ..., p'_N(x)} and N is the number of classes.
Cross entropy is a commonly used loss function, and in deep learning, the cross entropy loss function is used as an optimization target of the deep neural network model and is used for measuring the difference between the prediction result of the deep neural network model and the label.
The cross entropy loss value of each sample relative to the DNN model, Loss(X) = {Loss(x) | x ∈ X}, is computed as the label:

Loss(x) = CEloss(x) = - Σ_{n=1}^{N} p_n(x) · log p'_n(x)

where CEloss(x) is the cross entropy loss value of each sample, Loss(x) is the loss value of each sample, N is the number of classes, p_n(x) is the true probability that sample x belongs to the n-th class, and p'_n(x) ∈ [0, 1] is the probability with which the DNN predicts sample x as the n-th class. log denotes the natural logarithm; taking the natural logarithm of p'_n(x) penalizes the model's prediction errors on classes with lower predicted probability.
The cross entropy loss function has the following main characteristics:

(1) Non-negativity: because both the true labels and the predicted probabilities lie in [0, 1], the cross entropy loss function is non-negative over its domain.

(2) Approaching zero: the more accurate the model's prediction, the smaller the loss value; the cross entropy loss is zero when the model's predicted probabilities exactly match the true labels. This is because log(1) = 0: when p(x) = p'(x), the -log p'_n(x) term of the loss is zero.

(3) Large penalty for wrong predictions: when the model's predicted probabilities deviate from the true labels, the value of the cross entropy loss function increases, punishing the model's prediction errors.

(4) Larger penalty for classes with lower predicted probability: -log p'_n(x) grows rapidly as p'_n(x) approaches 0, so the model incurs a greater penalty on classes with lower predicted probability.

In classification problems, the cross entropy loss function thus penalizes wrong predictions and low-probability classes more heavily. During training, the model continuously adjusts its parameters by minimizing the cross entropy loss so as to improve its prediction accuracy on the true labels. This makes cross entropy a common optimization objective for training classification models in deep learning.
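The per-sample loss of S12 can be sketched numerically as follows (a minimal illustration, not the patent's code; the function name, the batch layout, and the small `eps` guard against log(0) are assumptions):

```python
import numpy as np

def ce_loss_per_sample(p_true, p_pred, eps=1e-12):
    """Loss(x) = - sum_n p_n(x) * log p'_n(x), computed per sample.
    p_true: (B, N) true distributions (typically one-hot labels).
    p_pred: (B, N) predicted softmax probabilities."""
    return -np.sum(p_true * np.log(p_pred + eps), axis=1)
```

For a one-hot label, this reduces to -log of the probability assigned to the true class, so a confident wrong prediction receives a large loss, matching characteristics (3) and (4) above.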
S13, train the cross entropy loss quantization model using the features F(X) = {F(x) | x ∈ X} extracted in S11 and the labels Loss(X) = {Loss(x) | x ∈ X} extracted in S12.
Specifically, the cross entropy loss quantization model is built with the optimized distributed gradient boosting library eXtreme Gradient Boosting (XGBoost), a model with massive parallel computing power and good portability that can efficiently learn more complex features from basic features. The important XGBoost hyper-parameters are set to a tree max_depth of 9, a learning_rate of 0.05, and n_estimators of 300.
In addition, the deep neural network model adopts a convolutional neural network structure. A convolutional neural network consists of an input layer, several hidden layers, and an output layer, where the hidden layers mainly comprise convolution layers, pooling layers, and fully connected layers. The convolution layers extract features from the input data, with different convolution kernels extracting different features; pooling layers are inserted periodically between successive convolution layers to reduce the number of parameters in the network and effectively prevent overfitting; the fully connected layer maps the learned feature representation to the label space of the samples and then to the output layer. Applying activation functions (such as ReLU and Sigmoid) after the convolution and fully connected layers typically gives convolutional neural networks more expressive power.
Each layer of the convolutional neural network contains several neurons, and neurons in adjacent layers are connected by weighted edges whose weights are obtained after multiple rounds of training on the training set data. During training, the output value of each neuron is obtained by applying an activation function to the weighted sum of the previous layer's neuron outputs; when this value exceeds a set threshold, the neuron is activated. After the convolutional neural network is trained, its parameters remain unchanged during testing.
Overall, a convolutional neural network maps input data x to an output result y. For example, in an N-class classification task, given an input x, the neurons of the convolutional neural network process it and an N-dimensional vector V = {v_1, v_2, v_3, ..., v_N} is obtained at the output layer. Normalizing V with a softmax function yields a probability vector P = {p_1, p_2, p_3, ..., p_N}, where p_i is the probability with which the convolutional neural network predicts the test case as the i-th class; the final prediction of the deep neural network model is the class with the largest probability in P.
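The softmax mapping from the output vector V to the probability vector P and the final predicted class can be sketched as follows (an illustrative helper, not from the patent; the max-shift is a standard numerical-stability trick):

```python
import numpy as np

def predict(v):
    """Map the output-layer vector V to probabilities P via softmax;
    the predicted class is the index of the largest probability."""
    e = np.exp(v - np.max(v))   # shift by max for numerical stability
    p = e / e.sum()
    return p, int(np.argmax(p))
```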
A specific example is as follows:

As shown in Table 1, a test case cross entropy loss quantization model is constructed. The SVHN dataset is selected as the evaluation subject. SVHN is a house-number dataset from Google Street View images, consisting of house numbers in 10 classes from 0 to 9, with 73257 training samples and 26032 test cases; each picture is a 32 x 32 color image. Two models are trained on the SVHN dataset: VGG-16 and ResNet-20. VGG-16 has 21 layers in total, containing 7274 neurons; ResNet-20 has 20 layers in total, containing 698 neurons. For each dataset and model combination, the features and labels of the training set are first extracted, and cross entropy loss quantization models model1 and model2 are built with XGBoost.
Table 1. Dataset and DNN models

Dataset | Training set | Test set | DNN model | Layers | Neurons
---|---|---|---|---|---
SVHN | 73257 | 26032 | VGG-16 | 21 | 7274
SVHN | 73257 | 26032 | ResNet-20 | 20 | 698
The unlabeled test cases are processed: the SVHN test set data are input into the trained VGG-16 and ResNet-20 models and divided, according to the DNN prediction results, into 10 sets C = {C_1, C_2, C_3, ..., C_10}, where the test cases in each set C_i (i ∈ [1, 10]) are predicted by the DNN to belong to the same class. Features are then extracted from the activation trajectories of the test cases on the neurons in the last hidden layers of the VGG-16 and ResNet-20 models, obtaining the feature set F_C = {F_{C_1}, F_{C_2}, ..., F_{C_10}}, where the features extracted for each set C_i are F_{C_i} = {F_i^1, F_i^2, ..., F_i^n}, F_i^j denotes the features of the j-th test case in set C_i, and n is the number of test cases in C_i.

The cross entropy loss of the unlabeled test cases is quantized: the extracted feature set F_C is input into the constructed quantization models model1 and model2 to obtain the cross entropy loss values of the test cases relative to the DNN model in each class set: L_C = {L_{C_1}, L_{C_2}, ..., L_{C_10}}, where L_{C_i} denotes the set of cross entropy losses of all test cases in C_i. To show the quantization effect of the cross entropy loss model, 200 test cases are randomly selected from the SVHN test set, and their true cross entropy loss values relative to the DNN are plotted together with the loss values quantized by the present method.
As shown in FIG. 2 (a) and (b), the abscissa is the test case number, the ordinate is the cross entropy loss value, the solid line with triangles is the true cross entropy loss value, and the dashed line with pentagrams is the cross entropy loss value quantized by the method of the present invention. The results show that the loss values quantized by the trained model are substantially identical to the true values.
For each C_i (i ∈ [1, 10]), the cross entropy loss values in the obtained set L_{C_i} are sorted in descending order to obtain the sorted test case set S_C = {S_{C_1}, S_{C_2}, ..., S_{C_10}}, where S_{C_i} denotes the result of sorting all test cases in C_i.
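The overall ordering pipeline of steps 2-4 can be sketched as below. This is a hedged sketch, not the patent's code: it assumes the per-case feature matrix and DNN-predicted classes are already available, and `quantizer` stands in for the trained model's predict method; all names are illustrative.

```python
import numpy as np

def rank_test_cases(features, pred_classes, quantizer, n_classes):
    """Steps 2-4: split unlabeled cases into class sets C_i by their
    predicted class, quantize each case's cross-entropy loss with the
    trained model, and sort every class set in descending loss order.
    Returns {i: list of case indices, highest loss first}."""
    losses = quantizer(features)            # stand-in for model.predict
    ranked = {}
    for i in range(n_classes):
        idx = np.where(pred_classes == i)[0]          # cases in C_i
        ranked[i] = idx[np.argsort(-losses[idx])].tolist()
    return ranked
```

The front of each ranked list then holds the cases most worth labeling for retraining.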
In addition, the invention provides an apparatus for the test case ordering method based on deep neural network cross entropy loss, comprising a memory for storing a computer program and a processor that implements the steps of the method when executing the computer program.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the test case ordering method based on deep neural network cross entropy loss.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program, which may be stored on a non-transitory computer-readable storage medium and which, when executed, may comprise the steps of the above-described method embodiments. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.
Claims (8)
1. A test case ordering method based on deep neural network cross entropy loss is characterized by comprising the following steps: the method comprises the following steps:
step 1: constructing a test case cross entropy loss quantization model;
step 2: the method comprises the steps of obtaining unlabeled test cases, inputting the test cases into a trained N-class DNN model to obtain a predicted result of the DNN model, wherein the predicted result comprises N class sets, the classes of the test cases in each set are the same, and extracting features from activation tracks of neurons in a last hidden layer of the DNN model according to the test cases to obtain feature sets;
step 3: inputting the feature set into a test case cross entropy loss quantization model to obtain a cross entropy loss value of the test case in each class set relative to a DNN model;
step 4: and sequencing the cross entropy loss values of the test cases to obtain a sequenced test case set.
2. The test case ordering method based on deep neural network cross entropy loss according to claim 1, wherein the method comprises the following steps: the test case cross entropy loss quantization model is constructed by using an optimized distributed gradient lifting library, cross entropy loss of the test case relative to DNN is used as a label, and the test case cross entropy loss quantization model is trained by combining with extracting a feature set of a training set.
3. The test case ordering method based on deep neural network cross entropy loss according to claim 2, wherein the method is characterized by comprising the following steps: the test case cross entropy loss quantization model is constructed by the following steps:
s11, feature extraction:
let the training set of the deep neural network be X= { X 1 ,x 2 ,x 3 ,...,x M M represents the size of the training set;
let l= { e 1 ,e 2 ,e 3 ,...,e m The last hidden layer of the deep neural network, where e i (i∈[1,m]) Represents the neurons in L, and m represents the number of neurons in L;
let alpha e (x) Representing the output value of sample x on neuron e, where α L (x) Representing the set of output values of sample x on L, representing the activation trajectory of sample x on L, alpha L (X)={α L (x) The I X epsilon X represents the activation track of all samples of the training set on the L;
statistics of each neuron e in L i Range of activation values on training set X Low i ,high i ]Dividing the range of activation values into k intervals alpha ei (X)={u 1 ,u 2 ,u 3 ,...,u k };
For any one sample x, the range of corresponding activation values isSample x is at neuron e i The extracted characteristic value is f x (e i ) For each sample x, an m-dimensional eigenvector F (x) = { F is extracted x (e 1 ),f x (e 2 ),f x (e 3 ),...,f x (e m ) The feature extracted by the training set is F (X) = { F (X) |x e X };
s12, extracting labels:
after the training set X is input into the deep neural network and the network's output is processed by the activation function, a set of predicted probabilities P(X) = {P'(x) | x ∈ X} is obtained, where P'(x) = {p'_1(x), p'_2(x), p'_3(x), ..., p'_N(x)} and N is the number of classes;

according to the cross entropy loss formula, the cross entropy loss value of each sample relative to the deep neural network, Loss(X) = {Loss(x) | x ∈ X}, is computed as the label:

Loss(x) = CEloss(x) = - Σ_{n=1}^{N} p_n(x) · log p'_n(x)

where CEloss(x) is the cross entropy loss value of each sample, Loss(x) is the loss value of each sample, N is the number of classes, p_n(x) is the true probability that sample x belongs to the n-th class, and p'_n(x) is the probability with which the DNN predicts sample x as the n-th class;
S13, train the test case cross entropy loss quantization model using the extracted features F(X) = {F(x) | x ∈ X} and the extracted labels Loss(X) = {Loss(x) | x ∈ X}.
4. The test case ordering method based on deep neural network cross entropy loss according to claim 3, wherein extracting features from the activation trajectories of the test cases on the neurons in the last hidden layer of the deep neural network comprises:

dividing the prediction results of the DNN model into N sets C = {C_1, C_2, C_3, ..., C_N}, where the test cases in each set C_i (i ∈ [1, N]) are predicted by the DNN to belong to the same class, and obtaining the feature set F_C = {F_{C_1}, F_{C_2}, ..., F_{C_N}}, where the features extracted for each set C_i are F_{C_i} = {F_i^1, F_i^2, ..., F_i^n}, F_i^j denotes the features of the j-th test case in set C_i, and n is the number of test cases in C_i.
5. The test case ordering method based on deep neural network cross entropy loss according to claim 4, comprising the following steps: inputting the feature set F_C into the test case cross entropy loss quantization model to obtain the cross entropy loss value of each test case in each class set relative to the DNN model, wherein L_Ci represents the cross entropy loss set of all test cases in C_i.
6. The test case ordering method based on deep neural network cross entropy loss according to claim 5, comprising the following steps: sorting the cross entropy loss values of the test cases in the cross entropy loss set L_Ci in descending order, obtaining a sorted test case set whose elements are the test cases of C_i ranked by loss.
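Claim 6 reduces to a descending sort of each class set by predicted loss, so that the test cases most likely to expose DNN misbehaviour are executed first. A minimal sketch with illustrative names:

```python
# Rank the test cases in one class set C_i by their predicted cross entropy
# loss, largest first.

def rank_by_loss(cases, loss_of):
    """Sort test cases by predicted loss in descending order."""
    return sorted(cases, key=loss_of, reverse=True)

losses = {"t1": 0.2, "t2": 1.7, "t3": 0.9}
print(rank_by_loss(["t1", "t2", "t3"], losses.get))  # ['t2', 't3', 't1']
```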
7. A device for the test case ordering method based on deep neural network cross entropy loss, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the test case ordering method based on deep neural network cross entropy loss according to any one of claims 1 to 6 when executing the computer program.
8. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the test case ordering method based on deep neural network cross entropy loss according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310401942.XA CN116303088A (en) | 2023-04-17 | 2023-04-17 | Test case ordering method based on deep neural network cross entropy loss |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116303088A true CN116303088A (en) | 2023-06-23 |
Family
ID=86822410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310401942.XA Pending CN116303088A (en) | 2023-04-17 | 2023-04-17 | Test case ordering method based on deep neural network cross entropy loss |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116303088A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111444076A (en) * | 2018-12-29 | 2020-07-24 | 北京奇虎科技有限公司 | Method and device for recommending test case steps based on machine learning model |
CN111737110A (en) * | 2020-05-21 | 2020-10-02 | 天津大学 | Test input selection method for deep learning model |
CN114327594A (en) * | 2021-12-24 | 2022-04-12 | 上海天玑科技股份有限公司 | Test case selection method, device and medium applied to distributed storage system |
CN114741310A (en) * | 2022-04-26 | 2022-07-12 | 河海大学 | Transferable image confrontation sample generation and deep neural network testing method and system |
WO2023273449A1 (en) * | 2021-06-29 | 2023-01-05 | 中国电子技术标准化研究院 | Method and apparatus for generating test case based on generative adversarial network |
CN115858388A (en) * | 2022-12-28 | 2023-03-28 | 浙江工业大学 | Test case priority ordering method and device based on variation model mapping chart |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113221905B (en) | Semantic segmentation unsupervised domain adaptation method, device and system based on uniform clustering and storage medium | |
US11107250B2 (en) | Computer architecture for artificial image generation using auto-encoder | |
CN108182259B (en) | Method for classifying multivariate time series based on deep long-short term memory neural network | |
US11585918B2 (en) | Generative adversarial network-based target identification | |
CN110866530A (en) | Character image recognition method and device and electronic equipment | |
CN113259331B (en) | Unknown abnormal flow online detection method and system based on incremental learning | |
US11068747B2 (en) | Computer architecture for object detection using point-wise labels | |
CN111931505A (en) | Cross-language entity alignment method based on subgraph embedding | |
CN113326390B (en) | Image retrieval method based on depth feature consistent Hash algorithm | |
US20200134429A1 (en) | Computer architecture for multiplier-less machine learning | |
CN114358188A (en) | Feature extraction model processing method, feature extraction model processing device, sample retrieval method, sample retrieval device and computer equipment | |
CN115471739A (en) | Cross-domain remote sensing scene classification and retrieval method based on self-supervision contrast learning | |
US11195053B2 (en) | Computer architecture for artificial image generation | |
CN114528835A (en) | Semi-supervised specialized term extraction method, medium and equipment based on interval discrimination | |
CN112132257A (en) | Neural network model training method based on pyramid pooling and long-term memory structure | |
CN115392357A (en) | Classification model training and labeled data sample spot inspection method, medium and electronic equipment | |
CN113095229B (en) | Self-adaptive pedestrian re-identification system and method for unsupervised domain | |
CN109101984B (en) | Image identification method and device based on convolutional neural network | |
Gao et al. | An improved XGBoost based on weighted column subsampling for object classification | |
CN111783688A (en) | Remote sensing image scene classification method based on convolutional neural network | |
CN110675382A (en) | Aluminum electrolysis superheat degree identification method based on CNN-LapseLM | |
US20220269991A1 (en) | Evaluating reliability of artificial intelligence | |
CN115796635A (en) | Bank digital transformation maturity evaluation system based on big data and machine learning | |
CN116303088A (en) | Test case ordering method based on deep neural network cross entropy loss | |
CN114443840A (en) | Text classification method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||