CN110134588B - Test case priority ordering method and test system based on code and combination coverage - Google Patents

Test case priority ordering method and test system based on code and combination coverage

Info

Publication number
CN110134588B
Authority
CN
China
Prior art keywords
test case
coverage
test
case
case set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910302282.3A
Other languages
Chinese (zh)
Other versions
CN110134588A (en)
Inventor
黄如兵
张犬俊
陈锦富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201910302282.3A priority Critical patent/CN110134588B/en
Publication of CN110134588A publication Critical patent/CN110134588A/en
Application granted granted Critical
Publication of CN110134588B publication Critical patent/CN110134588B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a test case prioritization method and test system based on code and combination coverage. The method comprises the following steps: 1. perform regression testing and collect the dynamic code coverage information of the test case set running on the original program set during version iteration; 2. order the case set using the dynamic code coverage information, i.e., order it based on the coverage of each unit at the chosen granularity and the combination conditions among those units; 3. run the test case set on the faulty versions, compare the results with those on the base-version program set, and generate an error detection matrix for the case set; 4. use the ordered case set and the error detection matrix to compute the effectiveness value of the case sequence, and compare and evaluate it against other classical ordering strategies. Aiming at the limitations of existing ordering algorithms, the invention considers the combined coverage information of each statement unit and fuses and associates this information to obtain richer ordering information, thereby greatly improving error detection efficiency.

Description

Test case priority ordering method and test system based on code and combination coverage
Technical Field
The invention belongs to the field of software testing, and relates to a test case priority ordering method and a test system based on code and combination coverage.
Background
The software development lifecycle shows that software engineering is far more than programming: software testing accounts for a large share of the effort, especially iterative regression testing. Regression testing is an effective method for checking whether local code modifications introduce errors elsewhere in the tested program, and data show that it typically consumes about 80% of the software testing budget and half of the overall software maintenance budget. To reduce the cost of regression testing, industry and academia have proposed a series of techniques, including test case prioritization TCP (Test Case Prioritization), test case selection TCS (Test Case Selection), test case minimization TCM (Test Case Minimization), and the like.
Test case prioritization TCP is undoubtedly a hotspot in current testing research. It was first proposed by Wong et al. in 1997: instead of the traditional unordered execution of test cases, the execution order is optimized to improve regression testing efficiency, so that the cases most likely to reveal program errors are executed earlier and errors are found and repaired sooner. The criterion is to order all test cases by priority according to a certain principle and then test accordingly.
Test case prioritization based on code coverage was proposed around the beginning of the 21st century, when Rothermel, Elbaum, and colleagues published four foundational articles between 1999 and 2002, forming the main framework of code-coverage-based prioritization.
They also gave a general description of the test case ordering TCP problem:
Given: a test case set T, the set PT of all permutations of T, and an ordering objective function f whose domain is PT and whose range is the real numbers.
Problem: find T' ∈ PT such that for all T'' ∈ PT with T'' ≠ T', f(T') ≥ f(T'').
Here PT represents all possible permutations of T, and the f function takes a given permutation as input and outputs a value proportional to the quality of that permutation, representing how good the permutation is.
Rothermel et al. first proposed the total and additional greedy algorithm ideas at statement coverage and branch coverage granularity, compared them with no ordering, random ordering, and optimal ordering, and verified in practice the superiority of total and additional over random testing. Building on the 1999 paper, Elbaum et al. made several improvements, first adding the coarser function coverage granularity alongside statement and branch granularity and confirming that coarse-granularity ordering reduces overhead but also reduces ordering effectiveness. Rothermel et al. also considered using software metrics to further improve the effectiveness of TCP techniques; they were the first to apply the FEP (fault exposing potential) values of test cases in the related greedy algorithms. FEP values can be used to estimate the defect detection capability of test cases and are computed using the PIE model and mutation analysis. The PIE model holds that a test case able to detect an intrinsic defect of a program must satisfy three conditions: (1) the test case executes the statement containing the defect; (2) the defect causes an error in the internal state of the program; (3) the program output is affected by the propagated erroneous internal state. Mutation testing is an effective means of evaluating the adequacy of a test case set: a large number of mutants can be generated by applying simple code modifications conforming to syntax constraints (i.e., mutation operators) to the program under test, and a test case is said to detect a mutant when its execution behavior on the mutant differs from that on the original program. Taking statement coverage as an example, given a program under test P and a test case t, the ms value of a statement s (s ∈ P) is first calculated as ms(s, t) = killed(s, t) / mutants(s), where mutants(s) returns the number of mutants obtained by applying a set of mutation operators to statement s, and killed(s, t) returns the number of those mutants that test case t detects. Assuming the set of statements covered by test case t is S, the FEP value of the test case is FEP(t) = Σ_{s∈S} ms(s, t). Rothermel et al. evaluated FEP values at statement and branch granularity, and Elbaum et al. computed them at method granularity. Meanwhile, because the gap between the proposed algorithms and the ideal optimal ordering remained large, Elbaum et al. also combined FEP values with defect-proneness values (such as the fault index value and the Diff value) to propose a series of Diff-FEP strategies. Taking the Diff value, the FEP value, and the Total policy as an example, test cases are first ordered by Diff value, and cases with equal Diff values are then ordered by FEP value.
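As a concrete illustration of the FEP calculation just described, the following Python sketch computes ms(s, t) and FEP(t) from hypothetical mutation-analysis counts; the data structures, identifiers, and numbers are invented for illustration and are not taken from the cited studies.

```python
# Sketch: computing ms(s, t) and FEP(t) from hypothetical mutation-analysis data.
# mutants_per_stmt[s]  -> number of mutants generated for statement s
# killed[(s, t)]       -> number of those mutants that test case t detects
# covered_stmts[t]     -> set of statements covered by test case t

def fep(t, covered_stmts, mutants_per_stmt, killed):
    """FEP(t) = sum over covered statements s of ms(s, t) = killed(s, t) / mutants(s)."""
    total = 0.0
    for s in covered_stmts[t]:
        m = mutants_per_stmt.get(s, 0)
        if m > 0:
            total += killed.get((s, t), 0) / m
    return total

# Hypothetical example data for two test cases over three statements.
mutants_per_stmt = {"s1": 4, "s2": 2, "s3": 5}
killed = {("s1", "t1"): 3, ("s2", "t1"): 1, ("s1", "t2"): 1, ("s3", "t2"): 4}
covered_stmts = {"t1": {"s1", "s2"}, "t2": {"s1", "s3"}}

for t in ("t1", "t2"):
    print(t, fep(t, covered_stmts, mutants_per_stmt, killed))
# t1 -> 3/4 + 1/2 = 1.25 ; t2 -> 1/4 + 4/5 = 1.05
```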
Jeffrey and Gupta et al. address the TCP problem by means of program slicing: they propose a new TCP technique based on relevant slices of test case outputs, use the relevant slices to identify all statements or branches that may affect the program output, and consider only the coverage of these statements or branches during ordering. Elbaum et al. argue that considering only the coverage of program entities is insufficient when ordering test cases; they further take into account the execution overhead of test cases and the severity of defects. In their empirical study they considered different distributions of test case execution overhead and defect severity: the distributions of execution overhead included a uniform distribution, a random distribution, a normal distribution, a distribution taken from the Mozilla open-source project, and a distribution taken from a QTP application, while the distributions of defect severity included a uniform distribution and a related distribution taken from the Mozilla open-source project. Zhao Jianjun et al. estimate the defect proneness and importance of each module by analyzing the internal structure of the program under test, which can then be used to guide the ordering of test cases.
While researchers have proposed a large number of TCP techniques, selecting the appropriate TCP technique for a given test scenario is equally important. Elbaum et al. investigated, through empirical studies, the effect of different test scenarios (the main factors investigated include features of the program under test, features of the test cases, types of code modification, etc.) on the effectiveness of TCP techniques. Their results provide an important basis for testers to select a suitable TCP technique under different test scenarios. Arareen et al. believe that the type of code modification at each evolution affects the choice of TCP technique during software evolution, and their results indicate that the approach they propose can select the most cost-effective TCP technique for each regression testing campaign.
Currently, TCP techniques are mainly based on source code, historical execution information, requirements, models, and the like, and are accordingly classified into code-based, search-based, requirement-based, and history-based ordering techniques. Judging by the number of related papers in recent years, code-based ordering techniques remain the mainstream, while research increasingly focuses on search-based ordering.
Disclosure of Invention
In order to sort test cases more effectively according to dynamic code coverage information, the invention provides a test case prioritization method and test system based on code and combination coverage. The invention is also compared with other classical test case ordering methods to verify the effectiveness and advancement of the proposed method. The technical scheme of the test case prioritization method comprises the following steps:
step 1, collecting code coverage information of a test case set according to the running condition of the test case set on a program set of a basic version;
step 2, running the test case set on the iterative error version to obtain error detection conditions of the test case set on each version;
step 3, using a code-based and combined coverage ordering method, ordering test cases according to the obtained code coverage information, and outputting an ordered test case sequence;
and step 4, according to the ordered test case sequence and the error detection conditions of the test case set, calculating an evaluation value of the test case sequence and carrying out statistical analysis.
The specific steps of step 1 are as follows:
step 1.1, converting the acquired test case set into a test case script that outputs the code coverage information of the test case set;
step 1.2, executing the test case script on the program set of the basic version to obtain a coverage information file of the test case set;
and 1.3, writing a related analysis script and parsing the coverage information file to generate a coverage matrix of the test case set on the base-version program set, in which each row represents a test case and each column represents a unit at the chosen coverage granularity of the program set; the matrix element value is 0 or 1, where 0 indicates that the test case does not cover the unit and 1 indicates that it does.
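The coverage matrix of step 1.3 could be assembled along the lines of the following sketch, which assumes each test case's coverage report has already been reduced to the set of granularity units it covers; the report layout and unit identifiers are hypothetical, and a real analysis script would parse the actual instrumentation output (e.g. gcov files) first.

```python
# Sketch: building a 0/1 coverage matrix from per-test coverage reports.
# Assumes each test case's report has been reduced to the set of granularity
# units (statements/branches/functions) it covered.

def build_coverage_matrix(reports, all_units):
    """reports: {test_id: set of covered unit ids}; returns rows of 0/1 ints."""
    matrix = {}
    for test_id, covered in reports.items():
        matrix[test_id] = [1 if u in covered else 0 for u in all_units]
    return matrix

# Hypothetical coverage of three function units by three test cases.
reports = {
    "t1": {"f1", "f3"},
    "t2": {"f2"},
    "t3": {"f1", "f2", "f3"},
}
all_units = ["f1", "f2", "f3"]
for tid, row in build_coverage_matrix(reports, all_units).items():
    print(tid, row)
```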
The specific steps of step 2 are as follows:
step 2.1, converting the acquired test case set into a corresponding test case script that outputs the execution results of the test case set on the program set;
step 2.2, running the test case script on the program sets of the basic version and the related iterative version respectively to obtain the output information of the test case set on each version program;
and 2.3, writing a related script and comparing the output of the test case set on the base version with that on the faulty iterative versions: if the running results of a test case on the base version and a faulty version are the same, the test case cannot detect that error; if they differ, the test case can detect the error; an error detection matrix FaultMatrix of the test case set is then generated, in which each row represents a test case and each column represents an implanted error; the element value is 0 or 1, where 0 indicates that the test case does not detect the error and 1 indicates that it does.
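The FaultMatrix of step 2.3 can be derived by comparing captured outputs, as in the following sketch; here the outputs are plain strings held in dictionaries for illustration, whereas the patent's scripts compare the output files produced on each program version, so the data layout is an assumption.

```python
# Sketch: deriving the error detection matrix by diffing the output of the
# base version against each faulty version, per test case.

def build_fault_matrix(base_outputs, faulty_outputs):
    """base_outputs: {test_id: output}; faulty_outputs: {version: {test_id: output}}.
    Returns {test_id: [0/1 per faulty version]}, 1 meaning the error is detected."""
    versions = sorted(faulty_outputs)
    return {
        tid: [0 if faulty_outputs[v][tid] == out else 1 for v in versions]
        for tid, out in base_outputs.items()
    }

# Hypothetical outputs for two test cases and two faulty versions.
base = {"t1": "42\n", "t2": "ok\n"}
faulty = {"v1": {"t1": "41\n", "t2": "ok\n"},    # only t1 detects the fault in v1
          "v2": {"t1": "42\n", "t2": "fail\n"}}  # only t2 detects the fault in v2
print(build_fault_matrix(base, faulty))           # {'t1': [1, 0], 't2': [0, 1]}
```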
The specific steps of step 3 are as follows:
step 3.1, first select the initial test case: according to the coverage information of each case in the to-be-sorted case set, select the test case covering the largest number of granularity units; if several cases meet this condition, select one at random.
And 3.2, set the combination coverage dimension and calculate an evaluation value for each test case under the set dimension. Taking function coverage granularity as an example, the program under test P has m function units MC = {mc_1, mc_2, mc_3, ..., mc_m}, and each unit has only two coverage states, mc_i ∈ {0, 1} (0 ≤ i ≤ m); there is a test case set T = {t_1, t_2, t_3, ..., t_n} of length n, an already-sorted case set ST = {st_1, st_2, st_3, ..., st_s} of length s, and a to-be-sorted case set CT = {ct_1, ct_2, ct_3, ..., ct_c} of length c. For the set dimension t, the granularity has C(m, t) coverage combinations, and since each unit may take the value 1 (covered) or 0 (uncovered), each combination has 2^t possible values. For a candidate test case ct_i, its evaluation value is d(ct_i, ST) = |CombSet(ct_i) \ CombSet(ST)|, i.e., the number of t-dimensional coverage combinations of ct_i that are not yet covered by the sorted case set, where CombSet(ct_i) denotes the combination coverage of test case ct_i in dimension t and CombSet(ST) denotes the combination coverage of the sorted case set ST in dimension t. (A code sketch of this computation is given after step 3.7.)
Step 3.3, according to step 3.2, calculate the evaluation values of all to-be-sorted cases in turn; if all evaluation values are 0, go to step 3.4; if several cases share the maximum evaluation value, go to step 3.5; otherwise, go to step 3.6.
Step 3.4, if all evaluation values of the to-be-sorted test cases in a given round are 0, select one to-be-sorted test case according to step 3.1, add it to the sorted case set, delete it from the to-be-sorted case set, and reset the combined coverage condition.
Step 3.5, if several cases in a given round share the same maximum evaluation value, select one of them according to step 3.1, add it to the sorted case set, and delete it from the to-be-sorted case set;
Step 3.6, select the case with the maximum evaluation value from the to-be-sorted case set, add it to the sorted case set, and delete it from the to-be-sorted case set;
Step 3.7, repeat steps 3.2-3.6; when no candidate test cases remain, end the sorting and output the sorted case sequence.
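To make steps 3.1-3.7 concrete, the following is a minimal Python sketch of the combination-coverage greedy ordering, assuming the coverage matrix of step 1.3 is given as 0/1 rows and that CombSet in dimension t is the set of (unit-index tuple, value tuple) pairs exhibited by a case; the tie-breaking and reset details follow one reading of steps 3.1 and 3.4, and all identifiers and data are illustrative rather than code from the patent.

```python
# Sketch of the combination-coverage greedy ordering (steps 3.1-3.7).

from itertools import combinations

def comb_set(row, t):
    """All t-dimensional coverage combinations exhibited by one test case."""
    idx = range(len(row))
    return {(c, tuple(row[i] for i in c)) for c in combinations(idx, t)}

def order_by_combination_coverage(cov, t=1):
    """cov: {test_id: 0/1 coverage row}. Returns the ordered list of test ids."""
    remaining = dict(cov)
    ordered, covered = [], set()          # covered plays the role of CombSet(ST)
    while remaining:
        # evaluation value d(ct, ST): number of new combinations contributed by ct
        scores = {tid: len(comb_set(row, t) - covered)
                  for tid, row in remaining.items()}
        best = max(scores.values())
        if best == 0:
            # step 3.4: nothing new -> pick the case covering the most units
            # (as in step 3.1) and reset the accumulated combination coverage
            pick = max(remaining, key=lambda tid: sum(remaining[tid]))
            covered = set()
        else:
            # steps 3.5/3.6: ties broken by raw unit coverage (random tie-break omitted)
            candidates = [tid for tid, s in scores.items() if s == best]
            pick = max(candidates, key=lambda tid: sum(remaining[tid]))
        covered |= comb_set(remaining[pick], t)
        ordered.append(pick)
        del remaining[pick]
    return ordered

# Hypothetical coverage matrix: three test cases over four function units.
cov = {"t1": [1, 0, 1, 0], "t2": [0, 1, 0, 1], "t3": [1, 1, 0, 0]}
print(order_by_combination_coverage(cov, t=1))
```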
The specific steps of step 4 are as follows:
step 4.1, at function granularity, apply step 3 with the one-dimensional coverage combination of the proposed method to generate the sorted test case sequence of the method;
step 4.2, at function granularity, sort the test cases with the greedy algorithms, namely the Total and Additional strategies, to generate the sorted test case sequence of each method;
step 4.3, according to the sorted test case sequence of each method and the error detection matrix FaultMatrix of the test case set, generate an evaluation value for the ordering result of each method; taking the average percentage of fault detection APFD (average percentage of fault detection) as an example, the evaluation formula is APFD = 1 - (TF_1 + TF_2 + ... + TF_m)/(n*m) + 1/(2n), where n represents the number of test cases, m represents the number of errors in the program, and TF_i represents the position, in the ordered sequence, of the first test case that detects the i-th error (a code sketch of this calculation is given after step 4.5);
step 4.4, graphically display the obtained APFD values and draw box plots of each method at each granularity for intuitive display and comparison;
and 4.5, perform statistical analysis on the obtained APFD values, calculate the Wilcoxon rank-sum test p-values (wilcox p-value) and the effect size between the proposed method and the traditional methods, and evaluate the method according to the results.
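The following sketch computes the APFD value of step 4.3 for a sorted sequence and illustrates the statistical comparison of step 4.5; the APFD samples are invented, and the use of scipy's rank-sum test together with a Vargha-Delaney A12 effect size is an assumption about tooling, not the exact implementation of the invention.

```python
# Sketch: APFD for one ordered sequence, plus a rank-sum p-value and an
# A12 effect size for comparing two sets of APFD values.

from scipy.stats import ranksums

def apfd(ordered_ids, fault_matrix):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2 * n).
    Assumes every fault is detected by at least one test case."""
    n = len(ordered_ids)
    m = len(next(iter(fault_matrix.values())))
    tf_sum = 0
    for fault in range(m):
        for pos, tid in enumerate(ordered_ids, start=1):  # TF_i is 1-based
            if fault_matrix[tid][fault] == 1:
                tf_sum += pos
                break
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

def a12(xs, ys):
    """Vargha-Delaney effect size: P(X > Y) + 0.5 * P(X == Y)."""
    greater = sum(1 for x in xs for y in ys if x > y)
    equal = sum(1 for x in xs for y in ys if x == y)
    return (greater + 0.5 * equal) / (len(xs) * len(ys))

# Hypothetical data: one small fault matrix and APFD samples from repeated runs.
fault_matrix = {"t1": [1, 0], "t2": [0, 1]}
print(apfd(["t2", "t1"], fault_matrix))           # 1 - 3/4 + 1/4 = 0.5
ours, total = [0.82, 0.85, 0.80, 0.88], [0.70, 0.74, 0.69, 0.72]
stat, p = ranksums(ours, total)
print(f"wilcox p-value={p:.4f}, A12={a12(ours, total):.2f}")
```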
Furthermore, the invention also designs a prototype testing system for realizing the test case priority ordering method based on code and combination coverage; the test system main interface includes 6 menu items: file options, algorithm parameter setting options, tested program setting options, result analysis setting options, statistical result analysis options and graphic viewing options;
the algorithm parameter setting options include selecting the ordering algorithms to run, such as the Total ordering strategy, the Additional ordering strategy, and the ordering strategy of the proposed method; setting the number of executions, i.e., the number of times the ordering method is run in a loop; whether to output the time consumed by the ordering; and the like;
the remaining 5 menu items represent 5 functional modules, respectively: the file option module is used to select the file path of the project, the tested program sets it contains, the case sets used in testing, and other such information; the tested program setting options specify which programs under the path are used in the experiment, together with the base version and iterative faulty versions of each program; the result analysis setting options show the specific error detection results of each ordering strategy and generate the specified evaluation values, such as APFD values; the statistical result analysis option calculates, from the evaluation values of the result analysis module, the wilcox p-value between the proposed method and the traditional methods to judge whether there are significant differences between the methods, and calculates the effect size between every pair of methods to judge how the proposed method compares with the traditional methods; the graphic viewing option first graphically displays the evaluation values (such as APFD values) of the result analysis module, mainly in the form of box plots for convenient and intuitive comparison of the methods, and then displays the results of the statistical result analysis module in pairwise comparison as a table, highlighting the data for which the p-value shows no significant difference and the effect size is below 0.5.
The invention has the beneficial effects that:
1. The invention realizes test case prioritization based on code and combination coverage: according to the dynamic coverage information of the test cases on the program and the combination conditions among the granularity units of the program, the obtained dynamic coverage information is further mined and richer original test data are used, thereby greatly improving the error detection rate of the test case set.
2. Based on function granularity, the Siemens and Unix program sets are used, and comparison with the classical Total and Additional ordering strategies verifies the superior performance of the method on programs of different scales.
3. A prototype system for test case prioritization based on code and combination coverage is designed and realized; the system achieves automated testing, improves testing efficiency, and plays an important role in the field of test case ordering.
4. The invention obtains dynamic coverage information at a specified granularity, extracts isolated units from it and combines them, and uses the richer inter-unit association information for ordering, thereby significantly improving the error-finding efficiency of the test cases and reducing the time and labor cost of software testing. That is, before each round of sorting starts, the evaluation values of the to-be-sorted cases are recalculated according to the combined coverage conditions at that granularity, and the case with the maximum evaluation value is selected and added to the sorted case set.
Drawings
FIG. 1 is a flow chart of a method of prioritizing test cases based on code and combination coverage.
Fig. 2 is a flow chart for collecting code coverage information.
FIG. 3 is a flow chart for collecting error matrices.
FIG. 4 is a code-based and combined coverage ordering flow diagram.
Fig. 5 is a graphical display interface diagram.
Fig. 6 is a diagram of a statistical analysis interface.
Detailed Description
The invention is further described in connection with the accompanying drawings and the embodiments, it being noted that the described embodiments are only intended to facilitate an understanding of the invention and are not intended to limit the invention in any way.
The invention aims to solve the problem of code-coverage-based test case ordering. It provides a test case prioritization method based on code and combination coverage, effectively improves the efficiency with which the test case set discovers errors during testing, provides a complete test framework and algorithm, and carries out full experiments to demonstrate the feasibility and effectiveness of the method.
First, several concepts to which the present invention relates are defined as follows.
Definition 1 test case: a test case is a set of test inputs, execution conditions, and expected results that are formulated to ensure a certain goal.
Definition 2, test case ordering: given a test case set T and the set PT of all permutations of T, and an ordering objective function f whose domain is PT and whose range is the real numbers, find T' ∈ PT such that for all T'' ∈ PT with T'' ≠ T', f(T') ≥ f(T'').
Here PT represents all possible permutations of T, and the f function takes a given permutation as input and outputs a value proportional to the quality of that permutation, representing how good the permutation is.
Definition 3, code coverage: code coverage refers to the proportion of the code executed in a test relative to the total code that needs to be executed.
Definition 4, statement coverage: also known as line coverage, segment coverage, or basic block coverage, it is the most common code coverage criterion and mainly measures the extent to which each executable statement in the program source code has been executed.
Definition 5, branch coverage: also known as decision coverage, it mainly measures the extent to which each reachable decision branch in the program has been executed.
Definition 6, condition coverage: it mainly measures the extent to which the true and false outcomes of every sub-expression of each predicate in the program have been tested.
Definition 7, path coverage: also known as predicate coverage, it measures the extent to which every possible path in the program has been executed; since all possible branches must be executed, when multiple branches are nested the branches have to be arranged and combined, so the number of test paths grows exponentially with the number of branches.
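A toy Python example of how the criteria above differ in practice; the function and inputs are invented purely for illustration.

```python
def classify(x):
    label = "non-negative"
    if x < 0:
        label = "negative"
    return label

# With the single input x = -1, every statement above is executed (100%
# statement coverage), but only the true branch of "x < 0" is taken
# (50% branch coverage). Adding the input x = 1 exercises the false branch
# as well, and the two inputs together also cover both paths of the function.
print(classify(-1), classify(1))
```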
As shown in fig. 1, a test case prioritization method based on code and combination coverage of the present invention includes:
and step 1, collecting code coverage information of the test case set according to the running condition of the test case set on the program set of the basic version.
In the above step 1, referring to fig. 2, the procedure of collecting the program code coverage information is as follows:
step 1.1, converting the acquired test case set into a test case script that outputs the code coverage information of the test case set;
step 1.2, executing the test case script on the program set of the basic version to obtain a coverage information file of the test case set;
and 1.3, writing a related analysis script and parsing the coverage information file to generate a coverage matrix of the test case set on the base-version program set, in which each row represents a test case and each column represents a unit at the chosen coverage granularity of the program set; the matrix element value is 0 or 1, where 0 indicates that the test case does not cover the unit and 1 indicates that it does.
And step 2, running the test case set on the iterative error version to obtain the error detection condition of the test case set on each version.
In the above step 2, referring to fig. 3, the use case error matrix generation step based on the specified program set is as follows:
step 2.1, converting the acquired test case set into a corresponding test case script that outputs the execution results of the test case set on the program set;
step 2.2, running the test case script on the program sets of the basic version and the related iterative version respectively to obtain the output information of the test case set on each version program;
and 2.3, writing a related script and comparing the output of the test case set on the base version with that on the faulty iterative versions: if the running results of a test case on the base version and a faulty version are the same, the test case cannot detect that error; if they differ, the test case can detect the error; an error detection matrix FaultMatrix of the test case set is then generated, in which each row represents a test case and each column represents an implanted error; the element value is 0 or 1, where 0 indicates that the test case does not detect the error and 1 indicates that it does.
And step 3, using a combined coverage ordering method, ordering the test cases according to the obtained code coverage information, and outputting the ordered test case sequence.
In the step 3, referring to fig. 4, the case set ordering method includes the following steps:
Step 3.1, first select the initial test case: according to the coverage information of each case in the to-be-sorted case set, select the test case covering the largest number of granularity units; if several cases meet this condition, select one at random.
And 3.2, set the combination coverage dimension and calculate an evaluation value for each test case under the set dimension. Taking function coverage granularity as an example, the program under test P has m function units MC = {mc_1, mc_2, mc_3, ..., mc_m}, and each unit has only two coverage states, mc_i ∈ {0, 1} (0 ≤ i ≤ m); there is a test case set T = {t_1, t_2, t_3, ..., t_n} of length n, an already-sorted case set ST = {st_1, st_2, st_3, ..., st_s} of length s, and a to-be-sorted case set CT = {ct_1, ct_2, ct_3, ..., ct_c} of length c. For the set dimension t, the granularity has C(m, t) coverage combinations, and since each unit may take the value 1 (covered) or 0 (uncovered), each combination has 2^t possible values. For a candidate test case ct_i, its evaluation value is d(ct_i, ST) = |CombSet(ct_i) \ CombSet(ST)|, i.e., the number of t-dimensional coverage combinations of ct_i that are not yet covered by the sorted case set, where CombSet(ct_i) denotes the combination coverage of test case ct_i in dimension t and CombSet(ST) denotes the combination coverage of the sorted case set ST in dimension t.
Step 3.3, according to step 3.2, calculate the evaluation values of all to-be-sorted cases in turn; if all evaluation values are 0, go to step 3.4; if several cases share the maximum evaluation value, go to step 3.5; otherwise, go to step 3.6.
Step 3.4, if all evaluation values of the to-be-sorted test cases in a given round are 0, select one to-be-sorted test case according to step 3.1, add it to the sorted case set, delete it from the to-be-sorted case set, and reset the combined coverage condition.
Step 3.5, if several cases in a given round share the same maximum evaluation value, select one of them according to step 3.1, add it to the sorted case set, and delete it from the to-be-sorted case set;
Step 3.6, select the case with the maximum evaluation value from the to-be-sorted case set, add it to the sorted case set, and delete it from the to-be-sorted case set;
Step 3.7, repeat steps 3.2-3.6; when no candidate test cases remain, end the sorting and output the sorted case sequence.
And 4, calculating an evaluation value of the sequence according to the ordered test case sequence and the error detection condition of the test case set, and carrying out statistical analysis.
In step 4, the evaluation data of the ordering methods are generated as follows:
step 4.1, at function granularity, apply step 3 with the one-dimensional coverage combination of the proposed method to generate the sorted test case sequence of the method;
step 4.2, at function granularity, sort the test cases with the greedy algorithms, namely the Total and Additional strategies, to generate the sorted test case sequence of each method;
step 4.3, according to the sorted test case sequence of each method and the error detection matrix FaultMatrix of the test case set, generate an evaluation value for the ordering result of each method; taking the average percentage of fault detection APFD (average percentage of fault detection) as an example, the evaluation formula is APFD = 1 - (TF_1 + TF_2 + ... + TF_m)/(n*m) + 1/(2n), where n represents the number of test cases, m represents the number of errors in the program, and TF_i represents the position, in the ordered sequence, of the first test case that detects the i-th error;
step 4.4, graphically display the obtained APFD values and draw box plots of each method at each granularity for intuitive display and comparison (a plotting sketch follows step 4.5);
and 4.5, perform statistical analysis on the obtained APFD values, calculate the Wilcoxon rank-sum test p-values (wilcox p-value) and the effect size between the proposed method and the traditional methods, and evaluate the method according to the results.
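The box-plot comparison of step 4.4 could be produced along the lines of the following matplotlib sketch; the APFD samples are invented, and the method labels merely mirror those used in fig. 5.

```python
import matplotlib.pyplot as plt

# Hypothetical APFD samples; labels mirror the method names used in fig. 5.
apfd_by_method = {
    "Tot": [0.70, 0.74, 0.69, 0.72],
    "Add": [0.78, 0.80, 0.77, 0.81],
    "F":   [0.82, 0.85, 0.80, 0.88],  # 1-dimensional combination coverage
    "F2":  [0.84, 0.86, 0.83, 0.89],  # 2-dimensional combination coverage
}

fig, ax = plt.subplots()
ax.boxplot(list(apfd_by_method.values()), labels=list(apfd_by_method.keys()))
ax.set_xlabel("ordering method")
ax.set_ylabel("APFD")
fig.savefig("apfd_boxplot.png")
```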
The invention also designs a prototype testing system for realizing the test case priority ordering method based on code and combination coverage; the test system main interface includes 6 menu items: file options, algorithm parameter setting options, tested program setting options, result analysis setting options, statistical result analysis options and graphic viewing options;
the algorithm parameter setting options include selecting the ordering algorithms to run, such as the Total ordering strategy, the Additional ordering strategy, and the ordering strategy of the proposed method; setting the number of executions, i.e., the number of times the ordering method is run in a loop; whether to output the time consumed by the ordering; and the like;
the remaining 5 menu items represent 5 functional modules, respectively: the file option module is used to select the file path of the project, the tested program sets it contains, the case sets used in testing, and other such information; the tested program setting options specify which programs under the path are used in the experiment, together with the base version and iterative faulty versions of each program; the result analysis setting options show the specific error detection results of each ordering strategy and generate the specified evaluation values, such as APFD values; the statistical result analysis option calculates, from the evaluation values of the result analysis module, the wilcox p-value between the proposed method and the traditional methods to judge whether there are significant differences between the methods, and calculates the effect size between every pair of methods to judge how the proposed method compares with the traditional methods; the graphic viewing option first graphically displays the evaluation values (such as APFD values) of the result analysis module, mainly in the form of box plots for convenient and intuitive comparison of the methods, as shown in fig. 5, where the abscissa represents each method (Add, Tot, F, and F2 denote the Additional strategy, the Total strategy, and the 1-dimensional and 2-dimensional combined coverage implementations of the invention, respectively) and the ordinate represents the APFD value of each method; it then displays the results of the statistical result analysis module in pairwise comparison as a table, as shown in fig. 6, highlighting the data for which the p-value shows a significant difference and the effect size is above 0.5.
The above list of detailed descriptions is only specific to practical embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent embodiments or modifications that do not depart from the spirit of the present invention should be included in the scope of the present invention.

Claims (5)

1. A test case prioritization method based on code and combination coverage is characterized by comprising the following steps:
step 1, collecting code coverage information of a test case set according to the running condition of the test case set on a program set of a basic version;
step 2, running the test case set on the iterative error version to obtain error detection conditions of the test case set on each version;
step 3, using a code-based and combined coverage ordering method, ordering test cases according to the obtained code coverage information, and outputting an ordered test case sequence;
the specific implementation of the step 3 is as follows:
step 3.1, firstly selecting the initial test case: according to the coverage information of each case in the to-be-sorted case set, selecting the test case covering the largest number of granularity units, and selecting one at random if several cases meet this condition;
step 3.2, setting a combined coverage dimension, and calculating an evaluation value of each test case according to the set dimension;
step 3.3, according to step 3.2, calculating the evaluation values of all to-be-sorted cases in turn; if all evaluation values are 0, going to step 3.4; if several cases share the maximum evaluation value, going to step 3.5; otherwise, going to step 3.6;
step 3.4, if all evaluation values of the to-be-sorted test cases in a given round are 0, selecting one to-be-sorted test case according to step 3.1, adding it to the sorted case set, deleting it from the to-be-sorted case set, and resetting the combined coverage condition;
step 3.5, if several cases in a given round share the same maximum evaluation value, selecting one of them according to step 3.1, adding it to the sorted case set, and deleting it from the to-be-sorted case set;
step 3.6, selecting the case with the maximum evaluation value from the to-be-sorted case set, adding it to the sorted case set, and deleting it from the to-be-sorted case set;
step 3.7, repeatedly executing steps 3.2-3.6, and when no candidate test cases remain, ending the sorting and outputting the sorted case sequence;
when calculating the evaluation value of each test case in step 3.2, the evaluation value at function coverage granularity is calculated as follows:
the program under test P has m function units MC = {mc_1, mc_2, mc_3, ..., mc_m}, and each unit has only two coverage states, mc_i ∈ {0, 1}, where 0 ≤ i ≤ m; there is a test case set T = {t_1, t_2, t_3, ..., t_n} of length n, an already-sorted case set ST = {st_1, st_2, st_3, ..., st_s} of length s, and a to-be-sorted case set CT = {ct_1, ct_2, ct_3, ..., ct_c} of length c; for the set dimension t, the granularity has C(m, t) coverage combinations, and since each unit may take the value 1 (covered) or 0 (uncovered), each combination has 2^t possible values; for a candidate test case ct_i, its evaluation value is d(ct_i, ST) = |CombSet(ct_i) \ CombSet(ST)|, i.e., the number of t-dimensional coverage combinations of ct_i that are not yet covered by the sorted case set, where CombSet(ct_i) denotes the combination coverage of test case ct_i in dimension t and CombSet(ST) denotes the combination coverage of the sorted case set ST in dimension t.
2. The test case prioritization method based on code and combination coverage of claim 1, wherein the specific implementation of step 1 is as follows:
step 1.1, converting the acquired test case set into a test case script that outputs the code coverage information of the test case set;
step 1.2, executing the test case script on the program set of the basic version to obtain a coverage information file of the test case set;
and 1.3, writing a related analysis script and parsing the coverage information file to generate a coverage matrix of the test case set on the base-version program set, in which each row represents a test case and each column represents a unit at the chosen coverage granularity of the program set; the matrix element value is 0 or 1, where 0 indicates that the test case does not cover the unit and 1 indicates that it does.
3. The test case prioritization method based on code and combination coverage of claim 1, wherein the specific implementation of step 2 is as follows:
step 2.1, converting the acquired test case set into a corresponding test case script that outputs the execution results of the test case set on the program set;
step 2.2, running the test case script on the program sets of the basic version and the related iterative version respectively to obtain the output information of the test case set on each version program;
and 2.3, writing a related script and comparing the output of the test case set on the base version with that on the faulty iterative versions: if the running results of a test case on the base version and a faulty version are the same, the test case cannot detect that error; if they differ, the test case can detect the error; an error detection matrix FaultMatrix of the test case set is then generated, in which each row represents a test case and each column represents an implanted error; the element value is 0 or 1, where 0 indicates that the test case does not detect the error and 1 indicates that it does.
4. The code and combination coverage based test case prioritization method of claim 1, further comprising step 4: calculating an evaluation value of the test case sequence and carrying out statistical analysis according to the ordered test case sequence and the error detection conditions of the test case set.
5. A test system implementing the code and combination coverage based test case prioritization method as in any one of claims 1-4, wherein the test system main interface includes 6 menu items: file options, algorithm parameter setting options, tested program setting options, result analysis setting options, statistical result analysis options and graphic viewing options;
the algorithm parameter setting options comprise selecting the ordering algorithms to run, setting the number of executions to specify how many times the ordering method is run in a loop, and whether to output the time consumed by the ordering;
the remaining 5 menu items represent 5 functional modules, respectively: the file option module is used to select the file path of the project, the tested program sets it contains, and the case set information used in testing; the tested program setting options specify which programs under the path are used in the experiment, together with the base version and iterative faulty versions of each program; the result analysis setting options show the specific error detection results of each ordering strategy and generate the specified evaluation values; the statistical result analysis option calculates, from the evaluation values of the result analysis module, the wilcox p-value between the proposed method and the traditional methods to judge whether there are significant differences between the methods, and calculates the effect size between every pair of methods to judge how the proposed method compares with the traditional methods; the graphic viewing option first graphically displays the evaluation values of the result analysis module, mainly in the form of box plots for convenient and intuitive comparison of the methods, and then displays the results of the statistical result analysis module in pairwise comparison as a table, highlighting the data for which the p-value shows no significant difference and the effect size is below 0.5.
CN201910302282.3A 2019-04-16 2019-04-16 Test case priority ordering method and test system based on code and combination coverage Active CN110134588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910302282.3A CN110134588B (en) 2019-04-16 2019-04-16 Test case priority ordering method and test system based on code and combination coverage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910302282.3A CN110134588B (en) 2019-04-16 2019-04-16 Test case priority ordering method and test system based on code and combination coverage

Publications (2)

Publication Number Publication Date
CN110134588A CN110134588A (en) 2019-08-16
CN110134588B true CN110134588B (en) 2023-10-10

Family

ID=67570020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910302282.3A Active CN110134588B (en) 2019-04-16 2019-04-16 Test case priority ordering method and test system based on code and combination coverage

Country Status (1)

Country Link
CN (1) CN110134588B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110647461B (en) * 2019-08-19 2023-03-28 江苏大学 Multi-information fusion regression test case sequencing method and system
CN110502447B (en) * 2019-08-30 2022-10-25 西安邮电大学 Regression test case priority ordering method based on graph
CN110704322B (en) * 2019-09-30 2023-03-10 上海中通吉网络技术有限公司 Software testing method and system
CN111813681B (en) * 2020-07-13 2022-09-09 兴业证券股份有限公司 Dynamic case priority ordering method and device
CN111858341A (en) * 2020-07-23 2020-10-30 深圳慕智科技有限公司 Test data measurement method based on neuron coverage
CN113672506B (en) * 2021-08-06 2023-06-13 中国科学院软件研究所 Dynamic proportion test case sorting and selecting method and system based on machine learning
CN115809203B (en) * 2023-02-07 2023-04-25 杭州罗莱迪思科技股份有限公司 Dynamic nesting method and device for software test cases and application of dynamic nesting method and device
CN117370151B (en) * 2023-09-08 2024-03-29 中国软件评测中心(工业和信息化部软件与集成电路促进中心) Reduction and optimization method, device, medium and equipment for test case execution
CN117520211A (en) * 2024-01-08 2024-02-06 江西财经大学 Random combination test case generation method and system based on multidimensional coverage matrix
CN117806981B (en) * 2024-03-01 2024-05-07 中国空气动力研究与发展中心计算空气动力研究所 CFD software automatic testing method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102253889A (en) * 2011-08-07 2011-11-23 南京大学 Method for dividing priorities of test cases in regression test based on distribution
CN102368226A (en) * 2011-10-10 2012-03-07 南京大学 Method for automatically generating test cases based on analysis on feasible paths of EFSM (extended finite state machine)
CN105446885A (en) * 2015-12-28 2016-03-30 西南大学 Regression testing case priority ranking technology based on needs
CN106776351A (en) * 2017-03-09 2017-05-31 浙江理工大学 A kind of combined test use-case prioritization method based on One test at a time strategies
CN106776311A (en) * 2016-12-09 2017-05-31 华北计算技术研究所 A kind of software interface test data auto generation method
CN107766245A (en) * 2017-10-18 2018-03-06 浙江理工大学 The online sort method of variable dynamics combined test use-case priority based on OTT strategies

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7092868B2 (en) * 2001-10-30 2006-08-15 International Business Machines Corporation Annealing harvest event testcase collection within a batch simulation farm
CN102831055B (en) * 2012-07-05 2015-04-29 陈振宇 Test case selection method based on weighting attribute
CN106598850B (en) * 2016-12-03 2019-05-28 浙江理工大学 A kind of location of mistake method based on program failure clustering

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102253889A (en) * 2011-08-07 2011-11-23 南京大学 Method for dividing priorities of test cases in regression test based on distribution
CN102368226A (en) * 2011-10-10 2012-03-07 南京大学 Method for automatically generating test cases based on analysis on feasible paths of EFSM (extended finite state machine)
CN105446885A (en) * 2015-12-28 2016-03-30 西南大学 Regression testing case priority ranking technology based on needs
CN106776311A (en) * 2016-12-09 2017-05-31 华北计算技术研究所 A kind of software interface test data auto generation method
CN106776351A (en) * 2017-03-09 2017-05-31 浙江理工大学 A kind of combined test use-case prioritization method based on One test at a time strategies
CN107766245A (en) * 2017-10-18 2018-03-06 浙江理工大学 The online sort method of variable dynamics combined test use-case priority based on OTT strategies

Also Published As

Publication number Publication date
CN110134588A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110134588B (en) Test case priority ordering method and test system based on code and combination coverage
CN109783349B (en) Test case priority ranking method and system based on dynamic feedback weight
Ahmed et al. Handling constraints in combinatorial interaction testing in the presence of multi objective particle swarm and multithreading
Elberzhager et al. Reducing test effort: A systematic mapping study on existing approaches
US10606570B2 (en) Representing software with an abstract code graph
Saeed et al. The experimental applications of search-based techniques for model-based testing: Taxonomy and systematic literature review
Liu et al. An optimal mutation execution strategy for cost reduction of mutation-based fault localization
CN109710514B (en) Method and system for solving tie-breaking in test case priority sequencing
CN110083514B (en) Software test defect evaluation method and device, computer equipment and storage medium
Conrad et al. Empirically studying the role of selection operators duringsearch-based test suite prioritization
Kiran et al. A comprehensive investigation of modern test suite optimization trends, tools and techniques
Li et al. A scenario-based approach to predicting software defects using compressed C4. 5 model
Liu et al. Statement-oriented mutant reduction strategy for mutation based fault localization
Le Thi My Hanh et al. Mutation-based test data generation for simulink models using genetic algorithm and simulated annealing
Yang et al. Vuldigger: A just-in-time and cost-aware tool for digging vulnerability-contributing changes
Demiröz et al. Cost-aware combinatorial interaction testing
Chi et al. Multi-level random walk for software test suite reduction
Khamıs et al. Automatic test data generation using data flow information
CN108446213A (en) A kind of static code mass analysis method and device
CN112783775B (en) Special character input testing method and device
CN107957944B (en) User data coverage rate oriented test case automatic generation method
CN114638185A (en) Chip verification method and device and storage medium
Khan et al. An analysis of the code coverage-based greedy algorithms for test suite reduction
Qian et al. A strategy for multi-target paths coverage by improving individual information sharing
Van Nho et al. A solution for improving the effectiveness of higher order mutation testing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant