CN115629998A - Test case screening method based on KMeans clustering and similarity - Google Patents

Test case screening method based on KMeans clustering and similarity

Info

Publication number
CN115629998A
CN115629998A (Application CN202211652532.4A)
Authority
CN
China
Prior art keywords
test
test cases
similarity
cases
clustering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211652532.4A
Other languages
Chinese (zh)
Other versions
CN115629998B (en)
Inventor
王世海
安东
刘斌
路云峰
杨勋利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202211652532.4A priority Critical patent/CN115629998B/en
Publication of CN115629998A publication Critical patent/CN115629998A/en
Application granted granted Critical
Publication of CN115629998B publication Critical patent/CN115629998B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3684 Test management for test design, e.g. generating new test cases

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)

Abstract

The invention belongs to the technical field of software fault location and discloses a test case screening method based on KMeans clustering and similarity. All test cases are represented as a set T and partitioned into k clusters by KMeans clustering. One test case is randomly selected from each cluster to form a set K; the test predictions of K are obtained, the execution results are judged, and the test cases whose result is failure form a set F. For each failed test case in F, similar test cases are screened to form a set J, whose test predictions are obtained and execution results judged. The test cases screened by similarity are merged with the set K, and the resulting known-prediction test cases serve as input information for subsequently training a classifier. By using a similarity method, statements containing faults can be distinguished more clearly, fault location efficiency is improved, and the method combines well with other methods that use unknown-prediction test cases.

Description

Test case screening method based on KMeans clustering and similarity
Technical Field
The invention relates to the technical field of software fault location, and in particular to a test case screening method based on KMeans clustering and similarity, for use in software fault location to improve fault location efficiency by exploiting test cases whose test predictions are unknown.
Background
In software testing, a test prediction is the source used to determine the expected result that is compared with the actual result during testing, i.e. the reference for the test result; obtaining it generally requires fully understanding the working principle of the program under test in combination with the test case input. In current software fault location, the widely adopted program-spectrum-based techniques demand a completeness of test cases that practical applications rarely provide. Existing fault location techniques make certain assumptions about test predictions: both general software fault location and software multi-fault location assume that enough successful/failed cases are collected and that ideal test predictions are available, i.e. the execution result of each test case is clearly separable, so that whether a test case executed successfully can be judged definitively. In actual software engineering practice, however, such ideal test predictions are often difficult to obtain. On the one hand, a debugger can manually check the observed case output against the test assertions in the test specification to determine whether a test case passes, but this manual process is very time-consuming, especially when the tester does not fully understand the system under test. On the other hand, the execution result of each case may be unlabeled; for example, when the program crashes or the program state is unknown, the execution result cannot be obtained, and a test activity may even yield a large number of test cases with unknown predictions.
CN105653444B discloses a software defect fault identification method and system based on internet log data, which takes the internet source system log data as a training set and extracts features from the training set, and generates a software defect fault log identification prediction model through machine learning or similarity matching; and analyzing and identifying the log segments representing the software defect fault aiming at the source log data of the user system, thereby obtaining the software defect fault type aiming at the user system log.
CN105893256B discloses a software fault location method based on a machine learning algorithm, which comprises the steps of firstly, describing possible fault distribution in a real program by utilizing Gaussian mixed distribution; then, removing the redundant test samples by using a clustering analysis method based on a Gaussian mixture model to find a special test set aiming at a specific fault; and modifying the support vector machine model to adapt to unbalanced data samples, and finding out a nonlinear mapping relation between case coverage information and an execution result by combining a parallel debugging theory. And finally, designing a virtual test suite, and putting the virtual test suite into the trained model for prediction to obtain the sentence doubtful degree value ranking.
Some researchers have begun to study the use of unknown-prediction test cases to improve fault location efficiency. One current proposal is to train a classifier with a small set of known-prediction test cases, use the classifier to predict the results of the remaining unknown-prediction test cases, and then use the predicted test cases for fault location, improving efficiency by increasing the number of test cases. Building on the classifier, information entropy has been adopted as a metric to filter unknown-prediction test cases: only test cases that reduce the information entropy of the fault location result (i.e. the overall uncertainty of the result) are selected for fault location. This remedies the earlier approach's loss of efficiency caused by including unsuitable predicted test cases without selection, and further improves fault location efficiency.
However, the information used in classifier training comes from a subset of known-prediction test cases, different choices of which have a very pronounced influence on the subsequent fault location effect, and these known-prediction test cases are all selected randomly. A small, randomly selected subset of test cases rarely contains enough program information, and relying entirely on such a subset introduces great uncertainty into fault location.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a test case screening method that addresses how to select suitable test cases for which to obtain test predictions and judge execution results, these test cases then being used to train a classifier and improve overall fault location efficiency.
The complete technical scheme of the invention comprises the following steps:
a test case screening method based on KMeans clustering and similarity comprises the following steps:
step S1: executing all unknown-prediction cases and counting their execution information, and representing it in vectorized form to obtain all test cases, where a test case is expressed as cij, i denoting the test case number and j the number of a statement in the test case; cij is a vector consisting of 0s and 1s;
step S2: representing all test cases represented by vectorization as a set T, wherein the set T comprises n test cases; performing Kmeans clustering on all n test cases in the set T; after clustering is completed, all test cases in the set T form k clusters;
and step S3: after clustering is completed, randomly selecting a test case in each cluster, obtaining its test prediction and judging the execution result; putting all the selected test cases into a set K, executing all the test cases in the set K and obtaining the execution results, and forming a set F from the test cases whose execution result is failure, i.e. inconsistent with the test prediction, the set F containing f failed test cases in total;
and step S4: screening similar test cases from the remaining test cases in the set T for the failed test cases in the set F; specifically, for each failed test case in the set F, calculating its similarity with all the remaining test cases in the set T, sorting by similarity, and selecting the top n/10f remaining test cases;
step S5: for F failed test cases contained in the set F, selecting n/10 similar test cases in total to form a set J, obtaining a test prediction and judging an execution result;
step S6: and combining the set J and the set K to obtain n/5 test cases in total, and using the n/5 known prediction test cases as input information for subsequent training classifiers.
Further, in step S1, cij =1 indicates that the corresponding test case i executes the statement j during the execution period; cij =0 indicates that the test case i does not execute statement j during execution.
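As an illustration of the vectorized representation above (a sketch with invented coverage data, not part of the patent):

```python
import numpy as np

# Coverage matrix: row i is the vector of test case i, column j is statement j.
# coverage[i][j] == 1 means test case i executed statement j during execution.
coverage = np.array([
    [1, 0, 1, 1],  # test case 0 executed statements 0, 2 and 3
    [1, 1, 0, 0],  # test case 1 executed statements 0 and 1
    [0, 1, 1, 0],  # test case 2 executed statements 1 and 2
])

print(coverage.shape)        # 3 test cases x 4 statements
print(coverage[0].tolist())  # the 0/1 vector of test case 0
```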
Further, in the Kmeans clustering process in step S2, the number of clusters of the target is determined to be one tenth of the total number of test cases, that is, n/10, and then n/10 test cases are randomly selected from the set T as the initial clustering center.
Further, in step S2, clustering the test cases, calculating euclidean distances between each test case and each cluster center, assigning the test case to the cluster where the closest cluster center is located, and iterating and updating the cluster center of each cluster; if the centers of all the clusters remain unchanged between two consecutive iterations, clustering is completed; otherwise, return and perform another iteration and update.
Furthermore, in step S3, the set K has n/10 test cases in total.
Further, in step S4, the similarity is a cosine of an included angle between the two test cases.
Compared with the prior art, the invention has the advantages that:
the invention realizes that representative test cases are screened by adopting a Kmeans clustering method, the difference between the test cases and the cluster is as small as possible, and the difference between the test cases and other clusters is as large as possible, so that more comprehensive test coverage and execution result information is contained in a small part of the test cases; on the basis, a similarity method is adopted to select a similar test case of a failed test case, the software fault location core is used for distinguishing whether the statement contains a fault, the test case which is very similar to the failed test case can more obviously distinguish the statement containing the fault, and the fault location efficiency can be improved. Moreover, the test case screening result obtained by the method of the embodiment can be well combined with other methods for improving the fault location efficiency by using unknown predicted test cases, and is used for training classifiers and the like.
Drawings
FIG. 1 is a flow chart of a test case screening method according to the present invention;
fig. 2 is a flowchart of an embodiment of a test case screening method according to the present invention.
Detailed Description
The present invention is described in detail with reference to the following embodiments and drawings, but it should be understood that the embodiments and drawings are only for illustrative purposes and are not intended to limit the scope of the present invention. All reasonable variations and combinations that fall within the spirit of the invention are intended to be within the scope of the invention.
Fig. 1 is a flowchart of a test case screening method according to the present invention. As shown in the figure, the test case screening method based on KMeans clustering and similarity according to the present invention includes the following steps:
step 1: 144 version data sets used for screening test cases for fault location tasks are selected from 2 Java open source projects, basic description information of unknown predicted test case sets is shown in table 1, the test cases used for software fault location and corresponding statement coverage information are obtained, and the statement coverage information can be obtained through automatic tools such as GZoltar and the like to obtain n test cases in total. Counting statement coverage information of the test cases, and representing the statement coverage information in a vector form consisting of 0 and 1 to obtain all test cases represented in a vectorization mode, wherein the test cases are represented as cij, i represents the number of the test cases, and j represents the number of statements in the test cases; cij =1 indicates that the corresponding test case i executes the statement j during execution; cij =0 indicates that the test case i does not execute statement j during execution.
TABLE 1 description of information related to Java open source items
Objects Total faulty versions Average statements Average testcases Faulty versions used
Math 106 2140 165 V1-11,15-103,105,106
Mockito 38 1546 616 V12-17,22-25,27-38
Step 2: represent all the test cases as a set T, carry out KMeans clustering on the test cases in the set T, and divide all the test cases into n/10 clusters after clustering.
In the clustering process, the number of target clusters is first set to n/10, and n/10 test cases are randomly selected from the set T as initial cluster centers. The remaining test cases are then clustered: each test case is assigned to the cluster of its nearest cluster center, so that its Euclidean distance to its own cluster center is less than or equal to its distance to the centers of the other clusters. The cluster center of each cluster is updated using equation (1). If the centers of all clusters remain unchanged between two successive iterations, clustering is complete; otherwise another iteration of assignment and update is performed. After clustering, the test cases in the set T form n/10 clusters.
$$c_j^{(w)} = \frac{1}{\left|S_j^{(w-1)}\right|} \sum_{x_i \in S_j^{(w-1)}} x_i \qquad (1)$$

where $c_j^{(w)}$ is the center of the $j$th cluster at the $w$th iteration; $\left|S_j^{(w-1)}\right|$ is the number of test cases in the $j$th cluster at the $(w-1)$th iteration; $x_i$ is the $i$th test case in the $j$th cluster; and $S_j^{(w-1)}$ is the $j$th cluster at the $(w-1)$th iteration.
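The assignment-and-update loop of equation (1) can be sketched as follows (an illustrative implementation with invented data; the patent fixes the cluster count at n/10, while k = 2 is used here so the demo stays small):

```python
import numpy as np

def kmeans(X, k, seed=0, max_iter=100):
    """Cluster the rows of X into k clusters by Euclidean distance.

    Centers are updated per equation (1): each new center is the mean of the
    test-case vectors assigned to that cluster. Iteration stops when all
    centers are unchanged between two consecutive iterations.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(max_iter):
        # Assign each test case to the nearest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each center as the mean of its members (equation (1));
        # an empty cluster keeps its previous center.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Illustrative coverage vectors (n = 6 cases); k = 2 chosen for the demo.
X = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 1],
              [0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]])
labels, centers = kmeans(X, k=2)
print(labels)
```

Identical coverage vectors always land in the same cluster, which is what makes one randomly chosen representative per cluster informative.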
And step 3: randomly select one test case in each cluster (n/10 test cases in total) to form a set K, obtain the test predictions of these test cases and judge the execution results, and select the f test cases whose result is failure, i.e. whose execution result is inconsistent with the test prediction, to form a set F.
And step 4: screen n/10 similar test cases for the failed test cases in the set F, i.e. n/10f similar test cases for each failed test case. Specifically, for a failed test case tf in the set F, the similarity S between tf and a remaining test case t in the set T is calculated according to formula (2), where S is the cosine of the angle between the corresponding vectors of the two test cases, that is:
$$S(t_f, t) = \cos\theta = \frac{\sum_{i=1}^{m} a_i b_i}{\sqrt{\sum_{i=1}^{m} a_i^2}\,\sqrt{\sum_{i=1}^{m} b_i^2}} \qquad (2)$$

where $\cos\theta$ is the cosine of the angle between the test case vectors $t_f$ and $t$; $a_i$ is the $i$th element of the vector of $t_f$; and $b_i$ is the $i$th element of the vector of $t$.
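Formula (2) can be implemented directly (a sketch; the vectors below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two coverage vectors, per formula (2)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # a case that executed no statements is treated as dissimilar
    return dot / (norm_a * norm_b)

t_failed = [1, 0, 1, 1]   # illustrative failed test case t_f
t_other  = [1, 0, 1, 0]   # illustrative remaining test case t
print(cosine_similarity(t_failed, t_other))  # 2 / sqrt(6), about 0.816
```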
And step 5: in this way, for each failed test case in the set F, calculate and sort its similarity with each remaining test case in the set T, take the top n/10f test cases according to the ranking, and finally select n/10 similar test cases in total to form a set J, whose test predictions are obtained and execution results judged.
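The ranking-and-selection of steps 4 and 5 might be sketched like this (illustrative only; the helper name `select_similar`, the case ids and the data are invented, not from the patent):

```python
import math

def cosine(a, b):
    # Cosine similarity between two coverage vectors, as in formula (2).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_similar(failed_cases, remaining, n, f):
    """For each failed case, keep the n/(10f) most similar remaining cases.

    `remaining` maps a case id to its coverage vector; the union of the
    per-failed-case selections forms the set J (about n/10 cases overall).
    """
    per_failed = max(1, n // (10 * f))
    selected = set()
    for fv in failed_cases:
        ranked = sorted(remaining, key=lambda cid: cosine(fv, remaining[cid]),
                        reverse=True)
        selected.update(ranked[:per_failed])
    return selected

remaining = {  # illustrative remaining cases in T
    "t1": [1, 0, 1, 1],
    "t2": [0, 1, 0, 0],
    "t3": [1, 0, 1, 0],
}
failed = [[1, 0, 1, 1]]  # one failed case, so f = 1
print(select_similar(failed, remaining, n=20, f=1))  # picks the 2 most similar
```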
Step 6: merge the set J and the set K to obtain n/5 test cases in total, and use these n/5 test cases together with their execution results as input information for subsequently training classifiers.
Specifically, a classifier is trained with the known-prediction test cases to predict the execution results of the unknown-prediction test cases, and information entropy is used to screen the predicted test cases for final fault location. The sets K and J are merged, and the resulting n/5 test cases serve as known-prediction test cases, while the remaining test cases in the set T serve as unknown-prediction test cases. The method is applied to a target fault location algorithm to improve its efficiency. Fault location with this method must be combined with a program-spectrum-based fault location algorithm; this embodiment selects four classical algorithms, Tarantula, Barinel, Ochiai and Jaccard, and performs cross-comparisons to verify the performance of the method.
The fault location efficiency of the flow shown in fig. 2 (marked as Prefilter) is compared with the experimental result of fault location after random screening without this method (marked as Comparison) to evaluate the method's performance.
Fault location efficiency is evaluated with the EXAM score, which represents the proportion of program statements that must be examined, out of the total number of statements in the faulty program, before the fault is successfully located. A higher EXAM score indicates less efficient localization; a lower EXAM score indicates more efficient localization.
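Under this definition, the EXAM score can be computed as follows (a sketch; the ranking data is invented):

```python
def exam_score(ranked_statements, faulty_statement):
    """EXAM score: fraction of statements examined, in descending order of
    suspiciousness, before the faulty statement is reached. Lower is better."""
    position = ranked_statements.index(faulty_statement) + 1
    return position / len(ranked_statements)

# Illustrative ranking of 10 statements by descending suspiciousness.
ranking = [f"s{i}" for i in range(10)]
print(exam_score(ranking, "s2"))  # fault found after examining 3 of 10 statements
```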
TABLE 2 mean EXAM score for different scenarios using four fault localization algorithms
[Table 2 is given only as an image in the original; it lists the per-scenario average EXAM scores of the four fault location algorithms on the Math and Mockito data sets, and its averages are quoted in the following paragraph.]
As can be seen from the table, on the verification data set the method improves the average fault location efficiency in all 8 scenarios. Under the four fault location algorithms, the proposed algorithm achieves an average EXAM score of 0.0388 on the Math data set, an 8.91% improvement in fault location efficiency over the comparison technique's average of 0.0426; on the Mockito data set it achieves an average EXAM score of 0.0487, a 16.6% improvement over the comparison technique's average of 0.0584. Averaged over the eight scenarios, the method improves fault location efficiency markedly, by 13.37% on average, and is applicable to various traditional fault location algorithms.
By the method, the test cases are effectively screened, the selected test cases have better representativeness, and the fault positioning efficiency can be effectively improved after the selected test cases are combined with the fault positioning algorithm; moreover, the screening result obtained by the method of the embodiment can be combined with various fault positioning methods, and has stable performance.
In addition, this embodiment also discloses a fault location method that applies similarity weighting to the test cases screened by the above method, specifically comprising the following steps:
step (1): forming a known test prediction case set T by using the test cases based on KMeans clustering and similarity obtained in the previous step L Training and obtaining a test prediction classifier, and obtaining sentence suspicion degree sequencing by utilizing a known test prediction case set;
the statement suspicion degree is calculated in the following mode:
Figure 760293DEST_PATH_IMAGE015
wherein,
Figure 874879DEST_PATH_IMAGE016
in order to be in the question of the degree of doubt,
Figure 531120DEST_PATH_IMAGE017
is the coverage spectrum s and the number of test cases that failed to execute;
Figure 47552DEST_PATH_IMAGE018
the number of test cases which cover the spectrum s and pass execution;
Figure 4314DEST_PATH_IMAGE019
is not covering spectrum s and contains the number of test cases that failed to execute;
Figure 340617DEST_PATH_IMAGE020
is the number of test cases that do not cover the spectrum s and pass execution.
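The four counts can be computed from a coverage matrix as sketched below. The patent gives its suspiciousness formula only as an image, so the Ochiai formula is used here purely as one common spectrum-based example, not as the patent's own formula:

```python
import math

def spectrum_counts(coverage, failed, s):
    """Count (a_ef, a_ep, a_nf, a_np) for statement s.

    coverage[i][s] == 1 means test case i executed statement s;
    failed[i] is True when test case i failed.
    """
    a_ef = sum(1 for i, r in enumerate(failed) if coverage[i][s] and r)
    a_ep = sum(1 for i, r in enumerate(failed) if coverage[i][s] and not r)
    a_nf = sum(1 for i, r in enumerate(failed) if not coverage[i][s] and r)
    a_np = sum(1 for i, r in enumerate(failed) if not coverage[i][s] and not r)
    return a_ef, a_ep, a_nf, a_np

def ochiai(a_ef, a_ep, a_nf, a_np):
    # Ochiai suspiciousness; illustrative choice of formula only.
    denom = math.sqrt((a_ef + a_nf) * (a_ef + a_ep))
    return a_ef / denom if denom else 0.0

coverage = [[1, 0], [1, 1], [0, 1]]  # 3 invented cases, 2 statements
failed = [True, False, False]        # only case 0 failed
counts = spectrum_counts(coverage, failed, s=0)
print(counts, ochiai(*counts))
```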
Step (2): use the test-prediction classifier obtained in step (1) to predict the execution results of the unknown-prediction test case set $T_{UL}$;
and (3): calculating a first information entropy by using sentence suspicion degree sequencing obtained by the known test prediction case set, adding a program spectrum of each unknown prediction test case into the known test prediction case set, calculating a second information entropy of newly generated sentence suspicion degree sequencing, comparing the second information entropy with the first information entropy, and adding the unknown prediction test case into an available test case set if the second information entropy is smaller than the first information entropy; if the second information entropy is smaller than the first information entropy, removing the unknown prediction test case, and obtaining an available unknown prediction test case set according to the following formula;
the entropy calculation uses an information entropy mode as follows:
Figure 800548DEST_PATH_IMAGE021
Figure 905908DEST_PATH_IMAGE022
in order to calculate the result for the entropy of the information,
Figure 917726DEST_PATH_IMAGE023
is a set of sentences, comprising n sentences,
Figure 600380DEST_PATH_IMAGE024
as a set of sentences
Figure 988636DEST_PATH_IMAGE023
In the sentence of the ith item in the above paragraph,
Figure 89447DEST_PATH_IMAGE025
as a sentence
Figure 6588DEST_PATH_IMAGE024
The degree of doubt.
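The entropy comparison of step (3) can be sketched as follows (illustrative; the suspiciousness values are invented, and normalizing suspiciousness into a probability distribution is an assumption made here):

```python
import math

def entropy(susp):
    """Information entropy of a suspiciousness ranking.

    Suspiciousness values are normalized into a probability distribution
    p(s_i); lower entropy means a less uncertain localization result.
    """
    total = sum(susp)
    if total == 0:
        return 0.0
    probs = [v / total for v in susp if v > 0]
    return -sum(p * math.log(p) for p in probs)

# A flat ranking (maximal uncertainty) vs. a peaked one (one clear suspect).
flat   = [0.25, 0.25, 0.25, 0.25]
peaked = [0.97, 0.01, 0.01, 0.01]
print(entropy(flat), entropy(peaked))

# Filtering rule of step (3): keep an unknown-prediction case only if adding
# it lowers the entropy of the new suspiciousness ranking.
keep = entropy(peaked) < entropy(flat)
print(keep)
```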
And (4): calculate the similarity between each case in the obtained available unknown-prediction test case set and a preset known failed test case. The similarity between test cases is the cosine

$$Sim(t_u, t_f) = \cos\theta = \frac{\sum_{i} a_i b_i}{\sqrt{\sum_{i} a_i^2}\,\sqrt{\sum_{i} b_i^2}}$$

where $t_f$ (with elements $a_i$) is the vector representation of the known failed test case's program-spectrum execution information, and $t_u$ (with elements $b_i$) is the vector representation of the selected unknown-prediction test case's program-spectrum execution information. The similarity $Sim(t_u, t_f)$ of each unknown-prediction test case with the known failed case is thereby obtained.
And (5): the fault location weighting algorithm applies similarity weighting to the obtained unknown-prediction test case set in combination with the prediction results, then adds the known test cases for suspiciousness calculation, obtaining similarity-weighted statement suspiciousness values and their ranking;
The weighted correction factor (given only as an image in the original) is constructed from the following quantities, with notation assigned here for readability since the original symbols appear only as images: $w_i$, the weight of unknown-prediction test case $i$, i.e. the similarity between its spectrum information and that of the failed case; $n_s^p(i)$, which takes the value 1 when statement $s$ is not executed by unknown-prediction test case $i$ and the predicted result is pass; $n_s^f(i)$, which takes the value 1 when statement $s$ is not executed by case $i$ and the predicted result is fail; $e_s^p(i)$, which takes the value 1 when statement $s$ is executed by case $i$ and the predicted result is pass; and $e_s^f(i)$, which takes the value 1 when statement $s$ is executed by case $i$ and the predicted result is fail.
After similarity weighting is applied to the available unknown-prediction test case set, the known test cases are added for suspiciousness calculation, thereby obtaining the similarity-weighted statement suspiciousness values and their ranking.
And (6): examine the statements in the order of suspiciousness obtained in step (5) until the faulty statement is located.
The above are only some embodiments of the present application. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the inventive concept herein, and all such modifications and variations are intended to fall within the scope of the invention.

Claims (6)

1. A test case screening method based on KMeans clustering and similarity is characterized by comprising the following steps:
step S1: executing all unknown-prediction cases and counting their execution information, and vectorizing it to obtain all test cases, represented as cij, where i denotes the test case number and j the number of a statement in the test case; cij is a vector consisting of 0s and 1s;
step S2: representing all test cases represented by vectorization as a set T, wherein the set T comprises n test cases; performing Kmeans clustering on all n test cases in the set T; after clustering is completed, all test cases in the set T form k clusters;
and step S3: after clustering is finished, randomly selecting a test case in each cluster, putting all the selected test cases into a set K, obtaining the test predictions of the test cases and judging the execution results, and forming a set F from the test cases whose execution result is failure, i.e. inconsistent with the test prediction, the set F containing f failed test cases in total;
and step S4: screening similar test cases from the remaining test cases in the set T for the failed test cases in the set F; specifically, for each failed test case in the set F, calculating its similarity with all the remaining test cases in the set T, sorting by similarity, and selecting the top n/10f remaining test cases;
step S5: for F failed test cases contained in the set F, selecting n/10 similar test cases in total to form a set J, obtaining a test prediction and judging an execution result;
step S6: and combining the set J and the set K to obtain n/5 known prediction test cases, and using the n/5 test cases as input information for subsequent training classifiers.
2. The method for screening the test cases based on the KMeans clustering and the similarity as claimed in claim 1, wherein in step S1, cij =1 indicates that the corresponding test case i executes a statement j during the execution period; cij =0 indicates that the test case i does not execute statement j during execution.
3. The method for screening test cases based on KMeans clustering and similarity according to claim 2, wherein in the Kmeans clustering process in step S2, the number of clusters of targets is determined to be one tenth of the total number of test cases, namely n/10, and then n/10 test cases are randomly selected from the set T as initial clustering centers.
4. The method for screening test cases based on KMeans clustering and similarity according to claim 3, wherein in step S2, the test cases are clustered, for each test case, the Euclidean distance between the test case and each clustering center is calculated, the test case is allocated to the cluster where the nearest clustering center is located, and the clustering center of each cluster is iterated and updated; if the centers of all the clusters remain unchanged between two continuous iterations, the clustering is finished; otherwise, return and perform another iteration and update.
5. The method for screening the test cases based on the KMeans clustering and the similarity as claimed in claim 4, wherein in step S3, the set K has n/10 test cases in total.
6. The method for screening the test cases based on the KMeans clustering and the similarity, according to claim 5, wherein in step S4, the similarity is the cosine of an included angle between two test cases.
CN202211652532.4A 2022-12-22 2022-12-22 Test case screening method based on KMeans clustering and similarity Active CN115629998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211652532.4A CN115629998B (en) 2022-12-22 2022-12-22 Test case screening method based on KMeans clustering and similarity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211652532.4A CN115629998B (en) 2022-12-22 2022-12-22 Test case screening method based on KMeans clustering and similarity

Publications (2)

Publication Number Publication Date
CN115629998A true CN115629998A (en) 2023-01-20
CN115629998B CN115629998B (en) 2023-03-10

Family

ID=84910276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211652532.4A Active CN115629998B (en) 2022-12-22 2022-12-22 Test case screening method based on KMeans clustering and similarity

Country Status (1)

Country Link
CN (1) CN115629998B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866317A (en) * 2010-06-29 2010-10-20 南京大学 Regression test case selection method based on cluster analysis
CN106598850A (en) * 2016-12-03 2017-04-26 浙江理工大学 Error locating method based on program failure clustering analysis
CN106776335A (en) * 2016-12-29 2017-05-31 中车株洲电力机车研究所有限公司 A kind of test case clustering method and system
CN110515837A (en) * 2019-07-31 2019-11-29 杭州电子科技大学 A kind of Test Case Prioritization method based on EFSM model and clustering
US20210263842A1 (en) * 2020-02-20 2021-08-26 Accenture Global Solutions Limited Software test case sequencing

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115809204A (en) * 2023-02-09 2023-03-17 天翼云科技有限公司 SQL injection detection test method, device and medium for cloud platform WAF
CN116401678A (en) * 2023-06-08 2023-07-07 中汽智联技术有限公司 Construction and extraction method of automobile information security test case
CN116401678B (en) * 2023-06-08 2023-09-01 中汽智联技术有限公司 Construction and extraction method of automobile information security test case

Also Published As

Publication number Publication date
CN115629998B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN115629998B (en) Test case screening method based on KMeans clustering and similarity
CN106201871B (en) Based on the Software Defects Predict Methods that cost-sensitive is semi-supervised
CN108563555B (en) Fault change code prediction method based on four-target optimization
Lanubile et al. Evaluating predictive quality models derived from software measures: lessons learned
CN108491302B (en) Method for detecting spark cluster node state
CN109783349B (en) Test case priority ranking method and system based on dynamic feedback weight
CN105653450A (en) Software defect data feature selection method based on combination of modified genetic algorithm and Adaboost
CN112699054B (en) Ordered generation method for software test cases
CN111782532B (en) Software fault positioning method and system based on network abnormal node analysis
CN111950645A (en) Method for improving class imbalance classification performance by improving random forest
CN111880957A (en) Program error positioning method based on random forest model
Baras et al. Automatic boosting of cross-product coverage using Bayesian networks
CN109376080B (en) Time-adaptive automatic defect positioning method and device
CN115422092A (en) Software bug positioning method based on multi-method fusion
CN114780967B (en) Mining evaluation method based on big data vulnerability mining and AI vulnerability mining system
CN107957944B (en) User data coverage rate oriented test case automatic generation method
CN115080386A (en) Scene effectiveness analysis method and device based on automatic driving function requirement
CN115239122A (en) Digital power grid software project tester recommendation method and device
CN114416524A (en) File error positioning method and device
CN115934558A (en) Similarity weighted fault positioning method using label-free test case
CN113434408B (en) Unit test case sequencing method based on test prediction
CN103744789B (en) Method of locating software errors by 3D surface representation
Xiao et al. A Systematic Literature Review on Test Case Prioritization and Regression Test Selection
Saghari et al. Human Judgment Simulation and KDD Techniques in Automotive Platform Benchmark Selection
CN113836027B (en) Method for generating failure test case by using generation type network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant