CN110134588A - Test case prioritization method and test system based on code combination coverage - Google Patents
- Publication number
- CN110134588A (application CN201910302282.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a test case prioritization method and test system based on code combination coverage, comprising: 1) performing regression testing, collecting in each version iteration the dynamic code coverage information produced by running the test suite on the base program set; 2) ordering the test suite using the dynamic code coverage information, i.e., ordering based on the coverage each test case achieves over the units of a chosen granularity and the combinations among those units; 3) running the test suite on the faulty versions and comparing the results with those on the base-version program set to generate the suite's fault-detection matrix; 4) computing ordering effectiveness values from the ordered suite and the fault-detection matrix, and comparing the results against other classical ordering strategies. Aimed at the limitations of existing ordering algorithms, the invention considers the combination coverage information of the program's statement elements and fuses the associations among them, obtaining richer ordering information and thereby greatly improving fault-detection efficiency.
Description
Technical field
The invention belongs to the field of software testing and relates to a test case prioritization method and test system based on code combination coverage.
Background technique
From the software development life cycle we can see that software engineering is not only development and programming: software testing occupies a very large share, especially iterative regression testing. In fact, regression testing is an effective method to ensure that modifications to part of the code do not introduce faults at other locations of the program under test, and statistics indicate that regression testing generally consumes around 80% of the software testing budget and half of the entire software maintenance budget. To reduce this expense, industry and academia have proposed a series of techniques based on testing and maintenance, specifically including test case prioritization (TCP), test case selection (TCS), and test case minimization (TCM).
Test case prioritization (TCP) is unquestionably a hot spot in today's testing research. It was first proposed by Wong et al. in 1997: departing from the traditional unordered execution of test cases, they proposed optimizing the test case execution priority to improve regression-testing efficiency, so that the cases most likely to reveal program faults execute earlier and faults are found and repaired sooner. The criterion is to rank all test cases by priority according to some principle and then test in that order.
Code-coverage-based test case prioritization emerged around the beginning of the 21st century, when Rothermel, Elbaum, and their colleagues published four foundational articles between 1999 and 2002, forming the main framework of coverage-based testing.
They also gave a general description of the TCP problem:
Given: a test suite T; PT, the set of all permutations of T; and an objective function f with domain PT and real-valued codomain.
Problem: find T' ∈ PT such that for all T'' ∈ PT (T'' ≠ T'), f(T') ≥ f(T'').
Here PT represents all possible orderings of T, and f maps a given ordering to a numerical value proportional to the quality of the ordering result, representing the performance of that ordering.
Rothermel et al. first proposed two greedy strategies, total and additional, based on statement coverage granularity and branch coverage granularity, and compared them with untreated, random, and optimal orderings, empirically demonstrating the superiority of total and additional over random ordering. Building on the 1999 paper, Elbaum et al. made several improvements: relative to statement and branch coverage granularity, they added the coarser function coverage granularity and confirmed that coarse-grained ordering reduces analysis overhead but also weakens ordering effectiveness. Rothermel et al. additionally considered using software metrics to further improve the effectiveness of TCP: they were the first to apply the fault-exposing-potential (FEP) value of a test case in the related greedy algorithms. The FEP value estimates a test case's fault-detection ability and is computed with the PIE model and mutation analysis. The PIE model holds that a test case able to detect a latent program defect must satisfy three conditions: (1) it executes the statement containing the defect; (2) the execution causes an erroneous internal program state; (3) the erroneous internal state propagates to the program's output. Mutation testing is an effective means of assessing test-suite adequacy: simple, syntactically valid code modifications (mutation operators) produce a large number of mutants, and a test case is said to detect a mutant when its execution behavior differs between the mutant and the original program. Taking statement coverage as an example, given a program P under ordering and a test case t, the mutation score of a statement s (s ∈ P) is

ms(s, t) = killed(s, t) / mutants(s)

where mutants(s) returns the number of mutants obtained by applying the mutation operators to statement s, and killed(s, t) returns the number of those mutants that test case t detects. Assuming S is the set of statements covered by test case t, the FEP value of the test case is Σs∈S ms(s, t). Rothermel tested FEP values at statement and branch granularity, while Elbaum et al. computed them at function granularity. Because the gap between the algorithms above and the ideal optimal ordering remained large, Elbaum et al. also combined FEP values with fault-proneness values (such as fault index and Diff values), proposing a series of Diff-FEP strategies. Taking the combination of Diff values, FEP values, and the total strategy as an example, test cases are first ordered by Diff value, and ties among cases with the same Diff value are broken by FEP value.
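As an illustration of the FEP computation and the Diff-FEP tie-breaking just described, the following sketch uses hypothetical per-case Diff and FEP values; the names and numbers are invented for illustration, not taken from the patent:

```python
# Hypothetical sketch: FEP value of a test case, and a Diff-FEP ordering
# (sort by Diff value first, break ties with the FEP value).

def fep_value(covered_statements, mutants, killed):
    # FEP(t) = sum over covered statements s of killed(s, t) / mutants(s)
    return sum(killed[s] / mutants[s] for s in covered_statements)

cases = ["t1", "t2", "t3", "t4"]
diff = {"t1": 2, "t2": 5, "t3": 5, "t4": 1}     # fault-proneness proxies
fep = {"t1": 0.4, "t2": 0.1, "t3": 0.9, "t4": 0.7}

# Higher Diff first; among cases with equal Diff, higher FEP first.
ordered = sorted(cases, key=lambda t: (-diff[t], -fep[t]))
```

Here t2 and t3 share Diff = 5, so t3 precedes t2 because of its higher FEP value.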
Jeffrey and Gupta addressed the TCP problem through program slicing: based on the relevant slices of test case outputs, they proposed a new TCP technique that identifies all statements or branches that may influence the program's output, and considers only the coverage of those statements or branches when ordering. Elbaum et al. argued that considering only coverage of program entities is insufficient for test case prioritization, and further took into account the execution cost of test cases and the severity of defects. In their empirical study they considered different distributions of test case execution cost and defect severity: the execution-cost distributions included uniform, random, and normal distributions, a distribution derived from the Mozilla open-source project, and one derived from the QTP application, while the defect-severity distributions included a uniform distribution and a Mozilla-derived distribution. Zhao Jianjun et al. analyzed the internal structure of the program under test to estimate the fault proneness and importance of each module, using these to guide the ordering of test cases.
Although researchers have proposed a large number of TCP techniques, selecting a suitable technique for a given test scenario deserves equal attention. Elbaum et al. empirically investigated the influence of different test scenarios on TCP effectiveness (the main factors being characteristics of the program under test, characteristics of the test cases, and the type of code modification); the empirical results provide important evidence for testers choosing a suitable TCP technique under different scenarios. Arafeen et al. argued that, as software continuously evolves, the type of code modification in each development round influences the choice of TCP technique; they modeled this as a multiple criteria decision making (MCDM) problem and used the analytic hierarchy process (AHP) for selection. Their results show that the proposed method can select the most cost-effective TCP technique for each regression-testing activity.
Current TCP techniques are based mainly on source code, historical execution information, requirements, and models, and are specifically divided into code-based ordering techniques, search-based ordering techniques, requirement-based ordering techniques, and history-based ordering techniques, among others. Judging by the number of related papers in recent years, code-based ordering remains the mainstream, while search-based ordering draws increasing attention from researchers; code-based techniques rely primarily on code coverage information to improve existing test case prioritization.
Summary of the invention
To order test cases more efficiently according to dynamic code coverage information, the invention proposes a test case prioritization method and test system based on code combination coverage. In addition, the invention is compared with other classical test case prioritization methods, demonstrating the effectiveness and advancement of the proposed method. The technical solution of the test case prioritization method of the invention comprises the following steps:
Step 1: run the test suite under its operating conditions and collect the suite's code coverage information on the base-version program set.
Step 2: run the test suite on the iterative faulty versions and obtain the suite's fault-detection results on each version.
Step 3: using the ordering method based on code combination coverage, order the test cases according to the collected code coverage information and output the ordered test case sequence.
Step 4: from the ordered test case sequence and the suite's fault-detection results, compute the evaluation value of the test case ordering and perform statistical analysis.
The specific steps of step 1 above are as follows:
Step 1.1: convert the acquired test suite into test case scripts capable of exporting the suite's code coverage information.
Step 1.2: execute the test case scripts on the base-version program set to obtain the suite's coverage information files.
Step 1.3: write an analysis script to parse the coverage information files and generate the suite's coverage matrix CoverageMatrix on the base-version program set, in which each row represents a test case and each column represents one unit of the program set at the chosen coverage granularity; a matrix element is 0 or 1, where 0 means the test case does not cover the unit and 1 means it does.
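The CoverageMatrix construction of step 1.3 can be sketched as follows; the unit names and per-case coverage sets are hypothetical stand-ins for what the analysis script would parse out of real coverage files (for example, gcov output):

```python
# Minimal sketch of step 1.3: rows = test cases, columns = coverage units,
# entry 1 if the case covers the unit, else 0. Inputs are hypothetical.
def build_coverage_matrix(covered_units_per_case, all_units):
    return [[1 if unit in covered else 0 for unit in all_units]
            for covered in covered_units_per_case]

coverage_matrix = build_coverage_matrix(
    [{"f1", "f3"}, {"f2"}, {"f1", "f2", "f3"}],  # units hit by t1, t2, t3
    ["f1", "f2", "f3"],                          # all units at this granularity
)
```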
The specific steps of step 2 above are as follows:
Step 2.1: convert the acquired test suite into corresponding test case scripts capable of exporting the suite's execution results on the program set.
Step 2.2: run the test case scripts separately on the base-version program set and the associated iterative versions to obtain the suite's output on each version of the program.
Step 2.3: write a script to compare the suite's output on the base version with that on each faulty iterative version: if a test case's results on the base version and the faulty version are identical, the case cannot detect the fault; if the results differ, the case detects it. From this, generate the suite's fault-detection matrix FaultMatrix, in which each row represents a test case and each column an injected fault; an element is 0 or 1, where 0 means the test case does not detect the fault and 1 means it does.
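The output comparison of step 2.3 can be sketched as follows, with invented output strings standing in for the captured program outputs:

```python
# Minimal sketch of step 2.3: FaultMatrix rows = test cases, columns = faults;
# a case detects a fault when its output differs from the base-version output.
def build_fault_matrix(base_outputs, faulty_version_outputs):
    # base_outputs[i]: output of test case i on the base version.
    # faulty_version_outputs[j][i]: output of case i on faulty version j.
    n_cases = len(base_outputs)
    return [[1 if faulty[i] != base_outputs[i] else 0
             for faulty in faulty_version_outputs]
            for i in range(n_cases)]

fault_matrix = build_fault_matrix(
    ["ok", "3", "done"],
    [["ok", "4", "done"],      # fault 1 changes only case 2's output
     ["err", "3", "done"]],    # fault 2 changes only case 1's output
)
```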
The specific steps of step 3 above are as follows:
Step 3.1: select the first test case. For the suite to be ordered, according to each case's coverage information, select the test case covering the most granularity units; if several cases satisfy this condition, choose one at random.
Step 3.2: set the combination coverage dimension and compute each test case's evaluation value under that dimension. Taking function coverage granularity as an example, for the sequence P to be ordered, suppose there are m function units MC = {mc1, mc2, mc3, ..., mcm}, each with only two coverage states, mci ∈ {0, 1} (1 ≤ i ≤ m); a test suite T = {t1, t2, t3, ..., tn} of length n; an ordered suite ST = {st1, st2, st3, ..., sts} of length s; and a to-be-ordered suite CT = {ct1, ct2, ct3, ..., ctc} of length c. With the combination dimension of each case set to t, the granularity yields C(m, t) coverage combinations, and since each unit's possible value is 1 (covered) or 0 (not covered), each combination has 2^t possible values. For a candidate test case cti, the evaluation value is d(cti, S) = CombSet(cti) ∪ CombSet(ST), where CombSet(cti) denotes the combination coverage of test case cti under dimension t, and CombSet(ST) denotes the combination coverage of the ordered suite ST under dimension t.
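The combination coverage CombSet of a single test case at dimension t can be sketched as follows, under one minimal reading of the definition above: every t-element unit combination paired with the 0/1 values the case realizes on those units. The sample row is invented:

```python
from itertools import combinations

def comb_set(coverage_row, t):
    # All C(m, t) index combinations, each paired with the 0/1 values
    # this case realizes on those units.
    return {(combo, tuple(coverage_row[i] for i in combo))
            for combo in combinations(range(len(coverage_row)), t)}

row = [1, 0, 1]            # one case's coverage over m = 3 function units
pairs = comb_set(row, 2)   # C(3, 2) = 3 combinations at dimension t = 2
```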
Step 3.3: following step 3.2, compute in turn the evaluation value of each case to be ordered. If all evaluation values are 0, go to step 3.4; if several cases share the maximum evaluation value, go to step 3.5; otherwise go to step 3.6.
Step 3.4: if in some round the evaluation values of all cases to be ordered are 0, select a case according to step 3.1, add it to the ordered suite, delete it from the to-be-ordered suite, and reset the combination coverage state.
Step 3.5: if in some round several cases share the same maximum evaluation value, select one of them according to step 3.1, add it to the ordered suite, and delete it from the to-be-ordered suite.
Step 3.6: select the case with the maximum evaluation value from the candidate suite, add it to the ordered suite, and delete it from the to-be-ordered suite.
Step 3.7: repeat steps 3.2-3.6; when no candidate test cases remain, terminate this ordering and output the ordered case sequence.
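Steps 3.1-3.7 can be sketched end to end as follows. This is one plausible reading of the evaluation value, under the assumption that a candidate is scored by the number of dimension-t combinations it would add beyond those already realized by the ordered suite, with the reset of step 3.4 applied when every candidate scores 0; it is an illustrative sketch, not the patent's reference implementation, and ties resolve to the first candidate rather than randomly, for determinism:

```python
from itertools import combinations

def comb_set(row, t):
    # Combination coverage of one case: C(m, t) index combinations with
    # the 0/1 values the case realizes on them.
    return {(c, tuple(row[i] for i in c))
            for c in combinations(range(len(row)), t)}

def order_cases(matrix, t=1):
    remaining = list(range(len(matrix)))
    ordered, seen = [], set()
    # Step 3.1: start with the case covering the most granularity units.
    first = max(remaining, key=lambda i: sum(matrix[i]))
    ordered.append(first)
    remaining.remove(first)
    seen |= comb_set(matrix[first], t)
    while remaining:                                        # step 3.7 loop
        # Steps 3.2/3.3: score candidates by combinations not yet realized.
        scores = {i: len(comb_set(matrix[i], t) - seen) for i in remaining}
        if max(scores.values()) == 0:
            seen = set()                                    # step 3.4: reset
            nxt = max(remaining, key=lambda i: sum(matrix[i]))
        else:
            nxt = max(remaining, key=lambda i: scores[i])   # steps 3.5/3.6
        ordered.append(nxt)
        remaining.remove(nxt)
        seen |= comb_set(matrix[nxt], t)
    return ordered
```

For example, order_cases([[1, 0, 1], [0, 1, 0], [1, 1, 1]], t=1) starts with the third case (widest coverage) and then prefers the second, whose uncovered-unit pattern adds more unseen combinations.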
The specific steps of step 4 above are as follows:
Step 4.1: using the one-dimensional combination coverage of this method, repeat step 3 at function granularity to generate this method's ordered test case sequence.
Step 4.2: using the greedy algorithms, i.e., the Total and Additional strategies, order the test cases at function granularity to generate each method's ordered test case sequence.
Step 4.3: from each method's ordered test case sequence and the suite's fault-detection matrix FaultMatrix, generate the evaluation value of each method's ordering result. Taking the average percentage of faults detected (APFD) as an example, the evaluation formula is

APFD = 1 - (TF1 + TF2 + ... + TFm) / (n × m) + 1 / (2n)

where n is the number of test cases, m is the number of faults in the program, and TFi is the execution rank, within the ordering, of the first test case that detects the i-th fault.
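The APFD formula of step 4.3 can be computed as in the sketch below; the ordering and fault matrix are small invented examples:

```python
def apfd(order, fault_matrix):
    # order: test-case indices in execution order;
    # fault_matrix[i][j] = 1 if case i detects fault j (FaultMatrix layout).
    n = len(order)
    m = len(fault_matrix[0])
    tf_sum = 0
    for j in range(m):
        # TF_j: 1-based rank of the first ordered case that detects fault j
        tf_sum += next(rank + 1 for rank, i in enumerate(order)
                       if fault_matrix[i][j])
    return 1 - tf_sum / (n * m) + 1 / (2 * n)
```

For two cases where each detects a distinct one of two faults, apfd([0, 1], [[1, 0], [0, 1]]) evaluates to 0.5, matching 1 - (1 + 2)/(2 × 2) + 1/(2 × 2).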
Step 4.4: graphically display the obtained APFD values, drawing a box plot for each method at each granularity for convenient, intuitive comparison.
Step 4.5: statistically analyze the obtained APFD values, computing the p-value of the Wilcoxon signed-rank test between this method and the conventional methods (wilcox p-value) as well as the effect size, and assess this method from these results.
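For step 4.5, the p-value would come from a Wilcoxon signed-rank test (for example, scipy.stats.wilcoxon on paired APFD samples). The patent does not name a specific effect-size measure, so the sketch below uses Cliff's delta, one common nonparametric choice, on invented APFD samples:

```python
def cliffs_delta(a, b):
    # Cliff's delta: (#pairs with a > b minus #pairs with a < b)
    # divided by the total number of cross-sample pairs.
    gt = sum(1 for x in a for y in b if x > y)
    lt = sum(1 for x in a for y in b if x < y)
    return (gt - lt) / (len(a) * len(b))

ours = [0.91, 0.88, 0.93, 0.90, 0.87]    # hypothetical APFD values, this method
total_ = [0.85, 0.84, 0.89, 0.83, 0.86]  # hypothetical APFD values, Total
delta = cliffs_delta(ours, total_)
```

A delta near 1 means this method's APFD values almost always exceed the baseline's.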
Further, the invention also designs a prototype test system implementing the test case prioritization method based on code combination coverage. The system's main interface contains six menu items: a file option, algorithm-parameter settings, program-under-test settings, result-analysis settings, a statistical-analysis option, and a graph-viewing option.
The algorithm-parameter settings include selecting the ordering algorithms, such as the Total ordering strategy, the Additional ordering strategy, and this method's ordering strategy; an execution-count setting specifying how many times the ordering method is repeated; and whether to output the ordering's elapsed time.
The remaining five menu items correspond to five functional modules. The file-option module selects the experiment's file path, which contains the program sets under test, the test suites, and related information. The program-under-test settings specify which program under the path the experiment uses, together with its base version and iterative faulty versions. The result-analysis settings display the detailed fault-detection results of each method's ordering strategy and generate the specified evaluation value, such as the APFD value. The statistical-analysis option uses the evaluation values from the result-analysis module to compute the Wilcoxon p-value between this method and the conventional methods, judging whether significant differences exist between methods, and also computes the pairwise effect sizes to compare the advantages of this method over the conventional ones. The graph-viewing option first graphically displays the evaluation values of the result-analysis module (such as the APFD value), mainly as box plots for easy, intuitive comparison of the methods, and then displays the statistical-analysis results pairwise in tabular form, highlighting p-values showing no significant difference and effect sizes below 0.5.
Beneficial effects of the invention:
1. The invention implements test case prioritization based on code combination coverage, mainly using each test case's dynamic coverage of the program and the combinations among the granularity units. It further mines the obtained dynamic coverage information and exploits richer original test data, substantially increasing the rate at which the test suite detects faults.
2. At function granularity, the invention was evaluated on the Siemens and Unix program sets and compared with the classical Total and Additional ordering strategies, demonstrating its superior performance on programs of different scales.
3. A prototype system for test case prioritization based on code combination coverage was designed and implemented. The system realizes automated testing well, improves testing efficiency, and can play a significant role in the field of test case prioritization.
4. The invention collects dynamic coverage information at a specified granularity, extracts the isolated units, and combines them, ordering with richer inter-unit association information. This significantly improves the efficiency with which test cases reveal faults and reduces the time and labor costs of software testing. That is, before each ordering round starts, the evaluation values of the cases to be ordered are recomputed from the combination coverage state at that granularity, and the case with the maximum evaluation value is selected and added to the ordered suite.
Description of the drawings
Fig. 1 is the flow chart of the test case prioritization method based on code combination coverage.
Fig. 2 is the flow chart for collecting code coverage information.
Fig. 3 is the flow chart for collecting the fault matrix.
Fig. 4 is the flow chart of ordering based on code combination coverage.
Fig. 5 is the graphical display interface.
Fig. 6 is the statistical analysis interface.
Specific embodiment
The invention is further described below with reference to the drawings and an implementation case; note that the described case is intended only to facilitate understanding of the invention and does not limit it in any way.
The invention aims to solve the problem of test case prioritization based on code coverage. It provides a prioritization method based on code combination coverage that effectively improves the rate at which the test suite finds faults during testing, provides a complete test framework and algorithm, and has undergone sufficient experiments demonstrating the method's feasibility and effectiveness.
First, several concepts involved in the invention are defined as follows.
Definition 1, test case: a test case is a set of test inputs, execution conditions, and expected results compiled to ensure a certain objective.
Definition 2, test case prioritization: given a test suite T, the set PT of all permutations of T, and an objective function f with domain PT and real-valued codomain, find T' ∈ PT such that for all T'' ∈ PT (T'' ≠ T'), f(T') ≥ f(T''). Here PT represents all possible orderings of T, and f maps a given ordering to a numerical value proportional to the quality of the ordering result, representing the performance of that ordering.
Definition 3, code coverage: code coverage is the proportion of the code executed during testing relative to the total code that needs to be executed.
Definition 4, statement coverage: also known as line coverage, segment coverage, and basic-block coverage, the most common code coverage criterion, mainly used to measure the degree to which each executable statement in the program's source code has been executed.
Definition 5, branch coverage: also known as decision coverage, mainly used to measure the degree to which each reachable decision branch in the program has been executed.
Definition 6, condition coverage: mainly used to measure the degree to which the true and false outcomes of every sub-expression of each decision in the program have been tested.
Definition 7, path coverage: also known as assertion coverage, mainly used to measure the degree to which every branch path of each function in the program has been executed. Because every possible branch must be exercised, nested branches require permutations and combinations of multiple branches, so the number of test paths grows exponentially with the number of branches.
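A small invented example distinguishing the criteria of definitions 4 and 5: two inputs can together execute every statement of a function, and the same pair also exercises both outcomes of each decision:

```python
# Illustrative only (not from the patent): statement vs. branch coverage.
def clamp(x, lo, hi):
    if x < lo:
        x = lo      # executed only when the first decision is true
    if x > hi:
        x = hi      # executed only when the second decision is true
    return x

# clamp(-1, 0, 10) executes the first assignment, clamp(11, 0, 10) the second;
# together the two calls cover every statement and both outcomes of each
# decision (branch coverage). clamp(-1, 0, 10) alone leaves `x = hi` and the
# true branch of the second decision uncovered.
```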
As shown in Fig. 1, the test case prioritization method based on code combination coverage of the invention comprises:
Step 1: run the test suite under its operating conditions and collect the suite's code coverage information on the base-version program set.
In step 1 above, referring to Fig. 2, the program code coverage information is collected as follows:
Step 1.1: convert the acquired test suite into test case scripts capable of exporting the suite's code coverage information.
Step 1.2: execute the test case scripts on the base-version program set to obtain the suite's coverage information files.
Step 1.3: write an analysis script to parse the coverage information files and generate the suite's coverage matrix CoverageMatrix on the base-version program set, in which each row represents a test case and each column represents one unit of the program set at the chosen coverage granularity; a matrix element is 0 or 1, where 0 means the test case does not cover the unit and 1 means it does.
Step 2: run the test suite on the iterative faulty versions and obtain the suite's fault-detection results on each version.
In step 2 above, referring to Fig. 3, the suite's fault matrix for the designated program set is generated as follows:
Step 2.1: convert the acquired test suite into corresponding test case scripts capable of exporting the suite's execution results on the program set.
Step 2.2: run the test case scripts separately on the base-version program set and the associated iterative versions to obtain the suite's output on each version of the program.
Step 2.3: write a script to compare the suite's output on the base version with that on each faulty iterative version: if a test case's results on the base version and the faulty version are identical, the case cannot detect the fault; if the results differ, the case detects it. From this, generate the suite's fault-detection matrix FaultMatrix, in which each row represents a test case and each column an injected fault; an element is 0 or 1, where 0 means the test case does not detect the fault and 1 means it does.
Step 3: using the combination-coverage ordering method, carry out test case prioritization according to the collected code coverage information and output the ordered test case sequence.
In step 3 above, referring to Fig. 4, the suite ordering method proceeds as follows:
Step 3.1: select the first test case. For the suite to be ordered, according to each case's coverage information, select the test case covering the most granularity units; if several cases satisfy this condition, choose one at random.
Step 3.2: set the combination coverage dimension and compute each test case's evaluation value under that dimension. Taking function coverage granularity as an example, for the program under test P, suppose there are m function units MC = {mc1, mc2, mc3, ..., mcm}, each with only two coverage states, mci ∈ {0, 1} (1 ≤ i ≤ m); a test suite T = {t1, t2, t3, ..., tn} of length n; an ordered suite ST = {st1, st2, st3, ..., sts} of length s; and a to-be-ordered suite CT = {ct1, ct2, ct3, ..., ctc} of length c. With the combination dimension of each case set to t, the granularity yields C(m, t) coverage combinations, and since each unit's possible value is 1 (covered) or 0 (not covered), each combination has 2^t possible values. For a candidate test case cti, the evaluation value is d(cti, S) = CombSet(cti) ∪ CombSet(ST), where CombSet(cti) denotes the combination coverage of test case cti under dimension t, and CombSet(ST) denotes the combination coverage of the ordered suite ST under dimension t.
Step 3.3: following step 3.2, compute in turn the evaluation value of each case to be ordered. If all evaluation values are 0, go to step 3.4; if several cases share the maximum evaluation value, go to step 3.5; otherwise go to step 3.6.
Step 3.4: if in some round the evaluation values of all cases to be ordered are 0, select a case according to step 3.1, add it to the ordered suite, delete it from the to-be-ordered suite, and reset the combination coverage state.
Step 3.5: if in some round several cases share the same maximum evaluation value, select one of them according to step 3.1, add it to the ordered suite, and delete it from the to-be-ordered suite.
Step 3.6: select the case with the maximum evaluation value from the candidate suite, add it to the ordered suite, and delete it from the to-be-ordered suite.
Step 3.7: repeat steps 3.2-3.6; when no candidate test cases remain, terminate this ordering and output the ordered case sequence.
Step 4, according to the error detection situation of ordering test case sequence and test use cases, the sequence is calculated
Assessed value is simultaneously for statistical analysis.
In above-mentioned steps 4, it is as follows that sort method assesses data generating step:
Step 4.1: using the one-dimensional combination coverage of this method, repeat step 3 at function granularity to generate this method's ordered test case sequence;
Step 4.2: using the greedy algorithms, i.e. the Total and Additional strategies, sort the test cases at function granularity to generate each method's ordered test case sequence;
Step 4.3: from each method's ordered test case sequence and the fault-detection matrix FaultMatrix of the test case set, compute the evaluation value of each method's ordering result. Taking the average percentage of faults detected, APFD (Average Percentage of Fault Detection), as an example, the evaluation formula is APFD = 1 - (TF_1 + TF_2 + ... + TF_m) / (n * m) + 1 / (2n), where n is the number of test cases, m is the number of faults in the program, and TF_i is the execution position, within the ordering, of the first test case that detects the i-th fault;
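The APFD computation of step 4.3 can be sketched as follows (an illustration only: the case names and fault sets are hypothetical, and undetected faults are counted at position n + 1, a common convention the text does not spell out):

```python
def apfd(order, detects):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2 * n).

    order: test case names in executed order (n cases).
    detects: case name -> set of fault ids the case detects, i.e. the
    rows of FaultMatrix. TF_i is the 1-based position of the first case
    detecting fault i; undetected faults are placed at position n + 1.
    """
    n = len(order)
    faults = set().union(*detects.values())
    m = len(faults)
    first_pos = {}
    for pos, case in enumerate(order, start=1):
        for f in detects[case]:
            first_pos.setdefault(f, pos)
    tf_sum = sum(first_pos.get(f, n + 1) for f in faults)
    return 1 - tf_sum / (n * m) + 1 / (2 * n)
```

An ordering that detects faults earlier yields a larger APFD, which is why it serves as the comparison metric between the strategies.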
Step 4.4: graphically display the obtained APFD values by drawing box plots of each method at each granularity, for convenient and intuitive comparison;
Step 4.5: statistically analyze the obtained APFD values: compute the p-value of the Wilcoxon rank-sum test (wilcox p-value) between this method and the conventional methods, compute the effect size, and evaluate this method according to these results.
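Step 4.5 can be sketched with a pure-Python normal-approximation rank-sum test and the Vargha-Delaney A12 effect size (assumptions: the text names only "effect size", so A12 is one plausible choice that matches the 0.5 threshold used later; the sample APFD values below are hypothetical, and the variance carries no tie correction):

```python
import math

def vargha_delaney_a12(x, y):
    # A12 = P(X > Y) + 0.5 * P(X == Y) over all pairs; 0.5 means no effect.
    gt = sum(1 for a in x for b in y if a > b)
    eq = sum(1 for a in x for b in y if a == b)
    return (gt + 0.5 * eq) / (len(x) * len(y))

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (midranks for ties, no tie variance correction: a simplification)."""
    combined = sorted(x + y)
    def midrank(v):
        lo = combined.index(v) + 1                      # first occurrence
        hi = len(combined) - combined[::-1].index(v)    # last occurrence
        return (lo + hi) / 2
    w = sum(midrank(v) for v in x)                      # rank sum of x
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))
```

In practice a library routine such as scipy.stats.mannwhitneyu gives the exact small-sample p-value; the sketch stays dependency-free.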
The present invention also provides a prototype test system implementing the test case prioritization method based on code and combination coverage. The main interface of the test system contains 6 menu items: a file option, an algorithm parameter setting option, a program-under-test setting option, a result analysis setting option, a statistical result analysis option, and a graph viewing option.
The algorithm parameter setting option includes selecting the sorting algorithm (e.g. the Total sorting strategy, the Additional sorting strategy, or the sorting strategy of this method), setting the number of executions, which specifies how many times the sorting method is run in a loop, and choosing whether to output the sorting elapsed time.
The remaining 5 menu items correspond to 5 functional modules. The file option module selects the target file path, which contains the program-under-test collection and information such as the test case sets. The program-under-test setting option specifies which program under that path the experiment uses, together with its base version and iterated faulty versions. The result analysis setting option displays the detailed fault-detection results of each method's ordering strategy and generates the specified evaluation values, such as APFD. The statistical result analysis option computes, from the evaluation values of the result analysis module, the Wilcoxon p-value between this method and the conventional methods to judge whether significant differences exist between the methods, and computes the pairwise effect sizes to compare the relative merits of this method and the conventional methods. The graph viewing option first displays the evaluation values (e.g. APFD) of the result analysis module graphically, mainly as box plots that compare the methods conveniently and intuitively. As shown in Figure 5, the abscissa represents each method, where Add, Tot, F1 and F2 denote the Additional strategy, the Total strategy, and the implementations of the invention based on 1-dimensional and 2-dimensional combination coverage respectively, and the ordinate represents the APFD value of each method. The results of the statistical analysis module are then displayed pairwise in a table; as shown in Figure 6, entries whose p-value indicates a significant difference and whose effect size exceeds 0.5 are highlighted.
The detailed description above lists only feasible embodiments of the invention and is not intended to limit its scope of protection; all equivalent implementations or modifications that do not depart from the technical spirit of the invention shall be included within the protection scope of the invention.
Claims (8)
1. A test case prioritization method based on code and combination coverage, characterized by comprising the following steps:
Step 1: run the test case set on the base-version program set and collect the code coverage information of the test case set;
Step 2: run the test case set on the iterated faulty versions to obtain the fault-detection results of the test case set on each version;
Step 3: using the sorting method based on code and combination coverage, sort the test cases according to the collected code coverage information and output the ordered test case sequence.
2. The test case prioritization method based on code and combination coverage according to claim 1, characterized in that step 1 is implemented as follows:
Step 1.1: from the acquired test case set, generate test case scripts that can output the code coverage information of the case set;
Step 1.2: execute the test case scripts on the base-version program set to obtain the coverage information files of the test case set;
Step 1.3: write an analysis script to parse the coverage information files and generate the coverage matrix CoverageMatrix of the test case set on the base-version program set, where each row represents a test case, each column represents a coverage granularity unit of the program set, and each matrix element is 0 or 1: 0 means the test case does not cover the unit, and 1 means it does.
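A matrix of the shape described in step 1.3 can be sketched in Python (an illustration only; the case and unit names are hypothetical, and the parsing of the coverage information files is assumed to have already produced the `covered` mapping):

```python
def build_coverage_matrix(cases, units, covered):
    """Build CoverageMatrix: rows are test cases, columns are coverage
    granularity units, entries are 1 if the case covers the unit, else 0.

    covered: case name -> set of unit names that case covers, as parsed
    from the coverage information files.
    """
    return {case: [1 if u in covered.get(case, set()) else 0 for u in units]
            for case in cases}
```

The rows of this matrix are exactly the 0/1 coverage vectors consumed by the sorting of step 3.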
3. The test case prioritization method based on code and combination coverage according to claim 1, characterized in that step 2 is implemented as follows:
Step 2.1: from the acquired test case set, generate corresponding test case scripts that can output the execution results of the case set on the program set;
Step 2.2: run the test case scripts separately on the base version and on the associated iterated versions of the program set to obtain the output of the test case set on each version;
Step 2.3: write a comparison script that compares the output of the test case set on the base version and on the faulty iterated versions: if a case produces identical results on the base version and on a faulty version, that case cannot detect the fault; if the results differ, it can. From this, generate the fault-detection matrix FaultMatrix of the test case set, where each row represents a test case, each column represents an injected fault, and each element is 0 or 1: 0 means the test case does not detect the fault, and 1 means it does.
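The comparison of step 2.3 can be sketched as follows (an illustration only; the version ids, case names, and output strings are hypothetical, and outputs are assumed to be directly comparable values):

```python
def build_fault_matrix(base_output, faulty_outputs):
    """Build FaultMatrix: rows are test cases, columns are injected faults;
    an entry is 1 when the case's output on the faulty version differs
    from its output on the base version, i.e. the case detects the fault.

    base_output: case name -> output on the base version.
    faulty_outputs: fault id -> (case name -> output on that version).
    """
    faults = sorted(faulty_outputs)   # fixed column order
    return {case: [1 if faulty_outputs[f][case] != out else 0 for f in faults]
            for case, out in base_output.items()}
```

The rows of this matrix are the per-case fault sets that the APFD evaluation of step 4 consumes.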
4. The test case prioritization method based on code and combination coverage according to claim 1, characterized in that step 3 is implemented as follows:
Step 3.1: select the first test case: among the cases awaiting sorting, according to each case's coverage information, select the test case covering the most granularity units; if multiple cases qualify, select one at random;
Step 3.2: set the combination coverage dimension and compute each test case's evaluation value under the set dimension;
Step 3.3: according to step 3.2, compute the evaluation value of each case awaiting sorting; if all cases' evaluation values are 0, go to step 3.4; if multiple cases share the maximum evaluation value, go to step 3.5; otherwise go to step 3.6;
Step 3.4: if, in some round, every test case awaiting sorting has an evaluation value of 0, select one case according to step 3.1, add it to the sorted case set, delete it from the set awaiting sorting, and reset the recorded combination coverage;
Step 3.5: if, in some round, multiple cases share the same maximum evaluation value, select one case with the maximum evaluation value according to step 3.1, add it to the sorted case set, and delete it from the set awaiting sorting;
Step 3.6: select the case with the maximum evaluation value from the candidate case set, add it to the sorted case set, and delete it from the set awaiting sorting;
Step 3.7: repeat steps 3.2-3.6; when no candidate test case remains, end this round of sorting and output the sorted case sequence.
5. The test case prioritization method based on code and combination coverage according to claim 4, characterized in that the evaluation value of each test case in step 3.2, when computed at function coverage granularity, is obtained as follows:
For the program P to be measured, suppose there are m function units MC = {mc_1, mc_2, mc_3, ..., mc_m}, each with only two coverage states, mc_i ∈ {0, 1} (1 ≤ i ≤ m); a test case set T = {t_1, t_2, t_3, ..., t_n} of length n; a sorted case set ST = {st_1, st_2, st_3, ..., st_s} of length s; and a set of cases awaiting sorting CT = {ct_1, ct_2, ct_3, ..., ct_c} of length c. Then, for each case, with the combination dimension set to t, there are C(m, t) coverage combinations at this granularity. Since each unit's possible value is 1 (covered) or 0 (not covered), each combination has 2^t possible values. For a candidate test case ct_i, the evaluation value is d(ct_i, S) = CombSet(ct_i) ∪ CombSet(ST), where CombSet(ct_i) denotes the combination coverage of test case ct_i under dimension t, and CombSet(ST) denotes the combination coverage of the test case set ST under dimension t.
6. The test case prioritization method based on code and combination coverage according to claim 1, characterized by further comprising step 4: according to the fault-detection results of the ordered test case sequence and the test case set, compute the evaluation value of the test case sequence and perform statistical analysis.
7. The test case prioritization method based on code and combination coverage according to claim 6, characterized in that step 4 is implemented as follows:
Step 4.1: using the one-dimensional combination coverage of this method, repeat step 3 at function granularity to generate this method's ordered test case sequence;
Step 4.2: using the greedy algorithms, i.e. the Total and Additional strategies, sort the test cases at function granularity to generate each method's ordered test case sequence;
Step 4.3: from each method's ordered test case sequence and the fault-detection matrix FaultMatrix of the test case set, compute the evaluation value of each method's ordering result; taking the average percentage of faults detected, APFD (Average Percentage of Fault Detection), as an example, the evaluation formula is APFD = 1 - (TF_1 + TF_2 + ... + TF_m) / (n * m) + 1 / (2n), where n is the number of test cases, m is the number of faults in the program, and TF_i is the execution position, within the ordering, of the first test case that detects the i-th fault;
Step 4.4: graphically display the obtained APFD values by drawing box plots of each method at each granularity, for convenient and intuitive comparison;
Step 4.5: statistically analyze the obtained APFD values: compute the p-value of the Wilcoxon rank-sum test (wilcox p-value) between this method and the conventional methods, compute the effect size, and evaluate according to these results.
8. A test system implementing the test case prioritization method based on code and combination coverage according to any one of claims 1-7, characterized in that the main interface of the test system contains 6 menu items: a file option, an algorithm parameter setting option, a program-under-test setting option, a result analysis setting option, a statistical result analysis option, and a graph viewing option;
the algorithm parameter setting option includes selecting the sorting algorithm, e.g. the Total sorting strategy, the Additional sorting strategy, or the sorting strategy of this method, setting the number of executions, which specifies how many times the sorting method is run in a loop, and choosing whether to output the sorting elapsed time;
the remaining 5 menu items correspond to 5 functional modules: the file option module selects the target file path, which contains the program-under-test collection and information such as the test case sets; the program-under-test setting option specifies which program under that path the experiment uses, together with its base version and iterated faulty versions; the result analysis setting option displays the detailed fault-detection results of each method's ordering strategy and generates the specified evaluation values, such as APFD; the statistical result analysis option computes, from the evaluation values of the result analysis module, the Wilcoxon p-value between this method and the conventional methods to judge whether significant differences exist between the methods, and computes the pairwise effect sizes to compare the relative merits of this method and the conventional methods; the graph viewing option first displays the evaluation values (e.g. APFD) of the result analysis module graphically, mainly as box plots that compare the methods conveniently and intuitively, and then displays the results of the statistical analysis module pairwise in a table, highlighting entries whose p-value indicates a significant difference and whose effect size exceeds 0.5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910302282.3A CN110134588B (en) | 2019-04-16 | 2019-04-16 | Test case priority ordering method and test system based on code and combination coverage |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110134588A true CN110134588A (en) | 2019-08-16 |
CN110134588B CN110134588B (en) | 2023-10-10 |
Family
ID=67570020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910302282.3A Active CN110134588B (en) | 2019-04-16 | 2019-04-16 | Test case priority ordering method and test system based on code and combination coverage |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110134588B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110502447A (en) * | 2019-08-30 | 2019-11-26 | 西安邮电大学 | A kind of regression test case priority ordering method based on figure |
CN110647461A (en) * | 2019-08-19 | 2020-01-03 | 江苏大学 | Multi-information fusion regression test case sequencing method and system |
CN110704322A (en) * | 2019-09-30 | 2020-01-17 | 上海中通吉网络技术有限公司 | Software testing method and system |
CN111813681A (en) * | 2020-07-13 | 2020-10-23 | 兴业证券股份有限公司 | Dynamic case priority ordering method and device |
CN111858341A (en) * | 2020-07-23 | 2020-10-30 | 深圳慕智科技有限公司 | Test data measurement method based on neuron coverage |
CN113672506A (en) * | 2021-08-06 | 2021-11-19 | 中国科学院软件研究所 | Dynamic proportion test case sequencing selection method and system based on machine learning |
CN114706769A (en) * | 2022-03-30 | 2022-07-05 | 天津大学 | Log-based regression test-oriented black box test case sequencing method |
CN115809203A (en) * | 2023-02-07 | 2023-03-17 | 杭州罗莱迪思科技股份有限公司 | Software test case dynamic nesting method, device and application thereof |
CN117370151A (en) * | 2023-09-08 | 2024-01-09 | 中国软件评测中心(工业和信息化部软件与集成电路促进中心) | Reduction and optimization method, device, medium and equipment for test case execution |
CN117520211A (en) * | 2024-01-08 | 2024-02-06 | 江西财经大学 | Random combination test case generation method and system based on multidimensional coverage matrix |
CN117806981A (en) * | 2024-03-01 | 2024-04-02 | 中国空气动力研究与发展中心计算空气动力研究所 | CFD software automatic testing method and system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030101041A1 (en) * | 2001-10-30 | 2003-05-29 | International Business Machines Corporation | Annealing harvest event testcase collection within a batch simulation farm |
CN102253889A (en) * | 2011-08-07 | 2011-11-23 | 南京大学 | Method for dividing priorities of test cases in regression test based on distribution |
CN102368226A (en) * | 2011-10-10 | 2012-03-07 | 南京大学 | Method for automatically generating test cases based on analysis on feasible paths of EFSM (extended finite state machine) |
CN102831055A (en) * | 2012-07-05 | 2012-12-19 | 陈振宇 | Test case selection method based on weighting attribute |
CN105446885A (en) * | 2015-12-28 | 2016-03-30 | 西南大学 | Regression testing case priority ranking technology based on needs |
CN106598850A (en) * | 2016-12-03 | 2017-04-26 | 浙江理工大学 | Error locating method based on program failure clustering analysis |
CN106776351A (en) * | 2017-03-09 | 2017-05-31 | 浙江理工大学 | A kind of combined test use-case prioritization method based on One test at a time strategies |
CN106776311A (en) * | 2016-12-09 | 2017-05-31 | 华北计算技术研究所 | A kind of software interface test data auto generation method |
CN107766245A (en) * | 2017-10-18 | 2018-03-06 | 浙江理工大学 | The online sort method of variable dynamics combined test use-case priority based on OTT strategies |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||