CN107861873A - Test case priority adjustment method based on two-attribute hierarchical adjustment - Google Patents
Test case priority adjustment method based on two-attribute hierarchical adjustment
- Publication number
- CN107861873A (application CN201711071737.2A)
- Authority
- CN
- China
- Prior art keywords
- test
- case
- test case
- adjustment
- cases
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3668—Testing of software
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
Abstract
Description
Technical Field
The invention belongs to the technical field of computer software testing methods, and in particular relates to a test case priority adjustment method based on two-attribute hierarchical adjustment.
Background Art
In recent years, demand for computer software has kept increasing. To meet users' needs, software developers have produced ever more applications for daily life, so the number of mobile applications has grown rapidly and their variety has widened. As application software gains more functions, the business logic within and between software modules becomes more complex, which greatly increases the probability of software defects. It is therefore necessary to test each part of the software once its functions are completed. In practice, however, development schedules are tight and the time left for the testing process is very limited; executing test cases strictly in their original order when testing a piece of software prolongs the time needed to find errors and lowers testing efficiency.
Test cases are not isolated from one another; relationships may exist among them. Based on this observation, the cases still to be executed in a test case set can be linked with the cases that have already revealed defects, and the execution order of these cases can be adjusted during execution so that the same or similar defects are found as early as possible. At the same time, a test case has several attributes, and the attribute chosen to partition the test case set determines how closely the test cases within the same "domain" are related. The magnitude of the adjustment applied to the cases also affects how soon defects are found; only by choosing a suitable partitioning attribute and adjustment scheme can the time to find errors be effectively shortened.
Summary of the Invention
The purpose of the present invention is to provide a test case priority adjustment method based on two-attribute hierarchical adjustment, which solves the problems of existing software testing that the relationship between test cases and their attributes is coupled, the selection of test cases to be adjusted lacks specificity, and only a single adjustment magnitude is available.
The technical solution adopted by the present invention is a test case priority adjustment method based on two-attribute hierarchical adjustment, comprising the following steps:
Step 1: partition and extraction of the test case set
Partition the test case set according to design meaning, then extract the test case sets that share common function types;
Step 2: decoupling of the test case set partition result
Drawing on the idea of the scenario method, a "result scenario" is appended after the operation event flow ends: since the designer not only specifies the expected result of each test case's successful execution but also anticipates the typical scenarios in which it may fail, a "predictable failure result scenario" is introduced, and the test case corresponding to each design meaning is given its own unique "predictable failure result scenario"; by comparing the actual failure scenario with the "predictable failure result scenarios", the accuracy of judging which design meaning caused the test case to fail is improved;
Step 3: dynamic priority adjustment
First, the unexecuted cases are classified into level-1 promotion cases, level-2 promotion cases, and cases not adjusted for the time being; then an initial ordering is produced according to the historical execution record of each test case; finally, priorities are adjusted on the basis of the initial ordering.
The present invention is further characterized in that:
The specific operation of Step 1 is as follows:
Step 1.1: the partition of the test case set should follow these principles: several test cases in the test case set may derive from the same design meaning, and a single test case in the set may also contain or cover several design meanings;
Step 1.2: the extraction of the test case sets that share common function types should follow this principle: when test cases belong to different functional modules but contain the same operational function, several test cases may belong to the same function type, whereas any single test case can cover only one function type.
The specific operation of Step 2 is as follows:
Let any design meaning be d, a "predictable failure result scenario" be fau(d), and the set of "predictable failure result scenarios" be Fau(d); then Fau(d) = {fau(d)1, fau(d)2, ..., fau(d)m}, where m ≥ 1;
The elements of the failure scenario set are obtained by enumeration, and the set may contain one or more elements. If a test case fails and that test case covers several design meanings, the actual failure scenario of the test case is matched one by one against the elements of the failure scenario set of every design meaning it contains; if the actual failure scenario matches one of these elements, the set to which that element belongs is the set Fau(d) of "predictable failure result scenarios" of the target design meaning, which in turn identifies the target design meaning.
The specific operation of Step 3 is as follows:
Step 3.1: classification of the unexecuted cases: if a test case fails during testing, the test cases that both belong to the same function type as the failed case and contain the same design meaning are classified as level-1 promotion cases; the test cases that either belong to the same function type or contain the same design meaning are classified as level-2 promotion cases; the test cases outside these two groups are classified as cases not adjusted for the time being;
Step 3.2: according to the historical execution record of the test cases and the grouping order of the functional modules, produce an initial ordering of the test cases of Step 3.1;
Step 3.3: following the initial ordering of Step 3.2, execute the test cases starting from the highest priority. If a case fails during testing, i.e. its execution result differs from the expected result, first raise the priority of the level-1 promotion cases of Step 3.1 to immediately after the currently executed case. If the failed test case covers only one design meaning, directly adjust the level-2 promotion cases of Step 3.1 by the promotion range computed with formula (1); if the test case covers several design meanings, first determine the target design meaning by using Step 2, and then adjust the level-2 promotion cases of Step 3.1 by the promotion range computed with formula (1);
Δrange = c (caseNum − exeNum − lv1Num)   (1)
where Δrange is the number of levels by which the priority needs to be raised; caseNum is the total number of cases in the test case set; exeNum is the number of cases already executed; lv1Num is the number of cases already identified as level-1 promotion cases; and c is the level promotion coefficient, used to control the size of the promotion, with c ∈ (0, 1).
The historical execution record in Step 3.2 is specifically the error detection rate or the error severity of the test cases.
The beneficial effects of the present invention are as follows: the test case priority adjustment method based on two-attribute hierarchical adjustment of the present invention associates the cases in the test case set with two attributes and, after decoupling and dividing the promotion levels, dynamically adjusts the priority of the cases during test execution. The method improves the accuracy with which the test cases to be adjusted are selected, shortens the running time needed to find errors during testing, and improves testing efficiency; it therefore has good practical value.
Brief Description of the Drawings
Fig. 1 is a flowchart of the test case priority adjustment method based on two-attribute hierarchical adjustment of the present invention;
Fig. 2 is a flowchart of the "result scenario" in the test case priority adjustment method of the present invention;
Fig. 3 is a diagram of the relationships involved in the test case priority adjustment of the present invention;
Fig. 4 is a comparison of the effect of the adjustment method of the present invention against other priority algorithms;
Fig. 5 is a comparison of the average percentage of faults detected (APFD) values of the adjustment method of the present invention and of other priority algorithms.
Detailed Description of the Embodiments
The present invention is described in detail below in conjunction with the accompanying drawings and specific embodiments.
A test case priority adjustment method based on two-attribute hierarchical adjustment according to the present invention, as shown in Fig. 1, comprises the following steps:
Step 1: partition and extraction of the test case set
Step 1.1: the partition of the test case set should follow these principles:
(1) Several test cases in the test case set may derive from the same design meaning. For example, with the commonly used equivalence class partitioning method, all test cases in a valid equivalence class express the intent of verifying whether the program implements its intended function, and all test cases in an invalid equivalence class express the intent of verifying whether the program avoids exceptions caused by invalid input. (2) One executed action may produce several operation results, all of which serve as evidence for judging whether a function is complete; therefore, a single test case in the test case set may also contain or cover several design meanings. Based on these principles, an example of a test case–design meaning association matrix is given in Table 1:
Table 1 Test case–design meaning association matrix
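As a purely illustrative sketch of such an association matrix (the case identifiers and design meanings below are hypothetical, not values from Table 1), the mapping from each test case to the design meanings it covers can be written as:

```python
# Hypothetical test case -> design meaning association matrix (in the spirit of Table 1).
# One design meaning may be shared by several cases, and one case may cover several meanings.
CASE_TO_DESIGN_MEANINGS = {
    "tc1": {"d1"},
    "tc2": {"d1", "d2"},   # a single case covering two design meanings
    "tc3": {"d2"},
    "tc4": {"d3"},
}

def cases_with_design_meaning(d, matrix=CASE_TO_DESIGN_MEANINGS):
    """Return all test cases that cover design meaning d."""
    return {tc for tc, meanings in matrix.items() if d in meanings}

# cases_with_design_meaning("d1") -> {"tc1", "tc2"}
```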
Step 1.2: the extraction of the test case sets that share common function types should follow these principles:
When test cases belong to different functional modules but contain the same operational function, several test cases may belong to the same function type, whereas any single test case can cover only one function type.
A function type contained in at least two modules is called a common function type fc. A set of common function types FC = {fc1, fc2, ..., fck}, where k ≥ 0, is extracted; FC contains all common function types of the cases under test. The cases in the test case set are assigned as far as possible to the elements of FC; the remaining unassigned test cases, i.e. those containing functions unique to a single module, no longer take part in the judgment and adjustment based on function type. Table 2 shows an example of a test case–function type association matrix:
Table 2 Test case–function type association matrix
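A minimal sketch of the common-function-type extraction described above, using hypothetical module and function-type names (none of them come from Table 2):

```python
# Hypothetical module -> function types mapping; the common function type set FC
# consists of the function types that appear in at least two modules.
MODULE_FUNCTION_TYPES = {
    "login_module":  {"input_check", "db_query"},
    "search_module": {"input_check", "db_query", "paging"},
    "report_module": {"export"},
}

def common_function_types(module_function_types):
    """Extract FC = {fc | fc is contained in at least two modules}."""
    counts = {}
    for types in module_function_types.values():
        for fc in types:
            counts[fc] = counts.get(fc, 0) + 1
    return {fc for fc, n in counts.items() if n >= 2}

# Here FC = {"input_check", "db_query"}; cases whose function type is not in FC
# (e.g. "export") no longer take part in function-type-based adjustment.
```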
Step 2: decoupling of the test case set partition result
As shown in Fig. 2, drawing on the idea of the scenario method, a "result scenario" is appended after the operation event flow ends: since the designer not only specifies the expected result of each test case's successful execution but also anticipates the typical scenarios in which it may fail, a "predictable failure result scenario" is introduced, and the test case corresponding to each design meaning is given its own unique "predictable failure result scenario"; by comparing the actual failure scenario with the "predictable failure result scenarios", the accuracy of judging which design meaning caused the test case to fail is improved;
Let any design meaning be d, a "predictable failure result scenario" be fau(d), and the set of "predictable failure result scenarios" be Fau(d); then Fau(d) = {fau(d)1, fau(d)2, ..., fau(d)m}, where m ≥ 1;
The elements of the failure scenario set are obtained by enumeration, and the set may contain one or more elements. If a test case fails and that test case covers several design meanings, the actual failure scenario of the test case is matched one by one against the elements of the failure scenario set of every design meaning it contains; if the actual failure scenario matches one of these elements, the set to which that element belongs is the set Fau(d) of "predictable failure result scenarios" of the target design meaning, which in turn identifies the target design meaning.
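A minimal sketch of this matching step, under the assumption that the predictable failure result scenarios are stored per design meaning as plain strings (all identifiers below are hypothetical):

```python
# Fau(d): for each design meaning, the enumerated set of predictable failure result scenarios.
FAU = {
    "d1": {"empty field is accepted", "no prompt on invalid date"},
    "d2": {"duplicate record is saved twice"},
}

def target_design_meaning(actual_failure_scenario, covered_meanings, fau=FAU):
    """Match the actual failure scenario one by one against Fau(d) of every design
    meaning covered by the failed case; the first match identifies the target d."""
    for d in covered_meanings:
        if actual_failure_scenario in fau.get(d, set()):
            return d
    return None  # no predictable scenario matched; the target meaning stays undetermined

# target_design_meaning("duplicate record is saved twice", ["d1", "d2"]) -> "d2"
```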
Step 3: dynamic priority adjustment
Step 3.1: classification of the unexecuted cases: if a test case fails during testing, the test cases that both belong to the same function type as the failed case and contain the same design meaning are classified as level-1 promotion cases; the test cases that either belong to the same function type or contain the same design meaning are classified as level-2 promotion cases; the test cases outside these two groups are classified as cases not adjusted for the time being. As shown in Fig. 3, the promotion-level relationships of the test cases are represented by set relationships;
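A minimal sketch of this classification, assuming each case is represented as a dictionary with a function type and the set of design meanings it covers (the field names are illustrative, not the patent's):

```python
def classify_unexecuted(failed_case, unexecuted_cases):
    """Split unexecuted cases into level-1, level-2 and not-adjusted groups
    relative to one failed case."""
    level1, level2, unchanged = [], [], []
    for case in unexecuted_cases:
        same_type = case["function_type"] == failed_case["function_type"]
        same_meaning = bool(case["design_meanings"] & failed_case["design_meanings"])
        if same_type and same_meaning:
            level1.append(case)      # shares both attributes -> level-1 promotion case
        elif same_type or same_meaning:
            level2.append(case)      # shares exactly one attribute -> level-2 promotion case
        else:
            unchanged.append(case)   # shares neither -> not adjusted for the time being
    return level1, level2, unchanged
```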
Step 3.2: determine the initial priority of each case from the historical execution record of the test cases, such as the error detection rate or the error severity; alternatively, the cases may first be grouped by functional module and ordered according to the overall order of the functional modules. In this way the test cases of Step 3.1 receive an initial ordering;
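One way to read this step, sketched under the assumption that each case records a historical error detection rate (the field name is hypothetical):

```python
def initial_ordering(cases):
    """Order cases by descending historical error detection rate; cases could
    equally be grouped by functional module first, as the text allows."""
    return sorted(cases, key=lambda c: -c["error_detection_rate"])
```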
Step 3.3: following the initial ordering of Step 3.2, execute the test cases starting from the highest priority. If a case fails during testing, i.e. its execution result differs from the expected result, first raise the priority of the level-1 promotion cases of Step 3.1 to immediately after the currently executed case. If the failed test case covers only one design meaning, directly adjust the level-2 promotion cases of Step 3.1 by the promotion range computed with formula (1); if the test case covers several design meanings, first determine the target design meaning by using Step 2, and then adjust the level-2 promotion cases of Step 3.1 by the promotion range computed with formula (1);
Δrange = c (caseNum − exeNum − lv1Num)   (1)
where Δrange is the number of levels by which the priority needs to be raised; caseNum is the total number of cases in the test case set; exeNum is the number of cases already executed; lv1Num is the number of cases already identified as level-1 promotion cases; and c is the level promotion coefficient, used to control the size of the promotion, with c ∈ (0, 1). The value of c is set according to the following principle: the tester chooses it in view of the specific software under test. If the functional modules of the software under test are closely related or functionally similar, for example the management modules of a management system whose business logic is highly similar, c may take a larger value; if the functional modules show no prominent similarity, for example there is no distinct functional characteristic or the functions involved are of many different kinds, c may take a smaller value.
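A minimal sketch of formula (1) and of raising a level-2 promotion case by the computed number of levels; how the pending queue is manipulated is an assumption here, since the text only defines the promotion amount:

```python
def promotion_range(case_num, exe_num, lv1_num, c=0.5):
    """Formula (1): Δrange = c * (caseNum - exeNum - lv1Num), with c in (0, 1)."""
    assert 0.0 < c < 1.0
    return int(c * (case_num - exe_num - lv1_num))

def promote(queue, case_id, delta_range):
    """Move case_id forward by delta_range positions in the pending queue,
    clamped at the front (an illustrative choice, not prescribed by the patent)."""
    i = queue.index(case_id)
    j = max(0, i - delta_range)
    queue.insert(j, queue.pop(i))
    return queue

# Example: promotion_range(case_num=50, exe_num=10, lv1_num=5, c=0.5) -> 17 levels
```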
As shown in Fig. 4, which compares the relationship between test case execution rate and error detection rate for the two-attribute hierarchical adjustment algorithm (TA&HA) of the present invention, no sorting throughout (Unsorted), and the functional domain division algorithm (FUDD): when testing software containing the same number of defects, the TA&HA algorithm finds more defects than the other two algorithms after executing the same number of test cases, so errors are found earlier.
As shown in Fig. 5, which compares the APFD values of the two-attribute hierarchical adjustment algorithm (TA&HA) of the present invention, no sorting throughout (Unsorted), and the functional domain division algorithm (FUDD): the TA&HA algorithm has a higher APFD value than the other two algorithms, adjusts with higher efficiency, and finds software defects faster.
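For reference, APFD is conventionally computed as APFD = 1 − (TF1 + ... + TFm)/(n·m) + 1/(2n), where n is the number of test cases, m the number of faults, and TFi the position of the first case that reveals fault i; this is the standard definition, not a formula specific to the patent. A small sketch:

```python
def apfd(first_reveal_positions, n):
    """Average percentage of faults detected for an ordering of n test cases.
    first_reveal_positions[i] is the 1-based position, in the executed ordering,
    of the first case that exposes fault i."""
    m = len(first_reveal_positions)
    return 1.0 - sum(first_reveal_positions) / (n * m) + 1.0 / (2 * n)

# Example: 10 cases, 3 faults first revealed at positions 1, 3 and 6 -> APFD ≈ 0.717
```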
The priority adjustment method of the present invention locates and selects the cases to be adjusted using the two attributes of design meaning and function type, and divides the adjustment magnitude of the test cases into two levels that are applied according to the actual situation during execution. This finer selection and adjustment strategy makes software defects surface in a concentrated way within a shorter time, which effectively lowers testing cost and improves the execution efficiency of the test cases.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711071737.2A CN107861873B (en) | 2017-11-03 | 2017-11-03 | Test case priority adjustment method based on two-attribute hierarchical adjustment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107861873A (en) | 2018-03-30 |
CN107861873B CN107861873B (en) | 2020-07-28 |
Family
ID=61700743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711071737.2A Active CN107861873B (en) | 2017-11-03 | 2017-11-03 | Test case priority adjustment method based on two-attribute hierarchical adjustment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107861873B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090217246A1 (en) * | 2008-02-27 | 2009-08-27 | Nce Technologies, Inc. | Evaluating Software Programming Skills |
CN101599044A (en) * | 2008-06-05 | 2009-12-09 | 国网南京自动化研究院 | A test case execution method |
CN102880545A (en) * | 2012-08-30 | 2013-01-16 | 中国人民解放军63928部队 | Method for dynamically adjusting priority sequence of test cases |
CN103500142A (en) * | 2013-10-12 | 2014-01-08 | 南京大学 | Method for testing multiple target test case priorities facing dynamic Web application |
CN105446885A (en) * | 2015-12-28 | 2016-03-30 | 西南大学 | Regression testing case priority ranking technology based on needs |
CN106874199A (en) * | 2017-02-10 | 2017-06-20 | 腾讯科技(深圳)有限公司 | Test case treating method and apparatus |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110515843A (en) * | 2019-08-13 | 2019-11-29 | 成都飞机工业(集团)有限责任公司 | Test case prioritization method based on defect set and inverted index |
CN110515843B (en) * | 2019-08-13 | 2022-05-06 | 成都飞机工业(集团)有限责任公司 | Test case priority ordering method based on defect set and inverted index |
CN111008137A (en) * | 2019-12-06 | 2020-04-14 | 广州品唯软件有限公司 | A method and system for customizing a test set |
CN111008137B (en) * | 2019-12-06 | 2023-06-23 | 广州品唯软件有限公司 | Method and system for customizing test set |
CN113094251A (en) * | 2019-12-23 | 2021-07-09 | 深圳奇迹智慧网络有限公司 | Embedded system testing method and device, computer equipment and storage medium |
CN113094251B (en) * | 2019-12-23 | 2024-02-23 | 深圳奇迹智慧网络有限公司 | Method and device for testing embedded system, computer equipment and storage medium |
CN111510839A (en) * | 2020-04-13 | 2020-08-07 | 广东思派康电子科技有限公司 | A kind of test method and test system of earphone |
CN111510839B (en) * | 2020-04-13 | 2021-10-29 | 广东思派康电子科技有限公司 | A kind of test method and test system of earphone |
Also Published As
Publication number | Publication date |
---|---|
CN107861873B (en) | 2020-07-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||