CN117435516B - Test case priority ordering method and system - Google Patents


Info

Publication number
CN117435516B
Authority
CN
China
Prior art keywords
case
test
sequence
test case
priority
Prior art date
Legal status
Active
Application number
CN202311772095.4A
Other languages
Chinese (zh)
Other versions
CN117435516A
Inventor
钱忠胜
俞情媛
朱辉
刘金平
Current Assignee
Jiangxi University of Finance and Economics
Original Assignee
Jiangxi University of Finance and Economics
Priority date
Filing date
Publication date
Application filed by Jiangxi University of Finance and Economics
Priority to CN202311772095.4A
Publication of CN117435516A
Application granted
Publication of CN117435516B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; error correction; monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • G06F 11/3684 - Test management for test design, e.g. generating new test cases
    • G06F 11/3688 - Test management for test execution, e.g. scheduling of test suites


Abstract

The invention provides a test case prioritization method and system. The method includes: controlling an agent to sort case pairs according to the case observation space and a learning strategy; determining a reward value according to the adjudication result, the historical execution time difference, and the position information, and adjusting the learning strategy according to the reward value; returning to the step of sequentially obtaining the case observation space; updating the priority factor of each test case and adjusting the order according to the updated priority factors; merging the case extraction sequence according to the updated merge block, and returning to the step of sequentially obtaining the case observation space of case pairs until the agent meets the convergence condition; and inputting the case sequence to be sorted into the converged agent to obtain the optimal case sorting sequence. The method and system can automatically prioritize the case sequence to be sorted based on the converged agent, improving the accuracy of test case prioritization.

Description

Test case priority ordering method and system
Technical Field
The present invention relates to the field of software testing technologies, and in particular, to a test case prioritization method and system.
Background
In the software development process, as product functions are continuously added and iterated, the scale of the test case set keeps growing and the cost of regression testing keeps rising. Because test resources are limited, it is impossible to execute all test cases. To improve regression testing efficiency, a regression testing strategy that better meets the regression testing requirements needs to be formulated: the test cases are sorted according to a set test target to determine their execution order, so that the most valuable test cases are executed first.
In the existing test case prioritization process, the order is generally set based on manual experience, resulting in low test case prioritization accuracy.
Disclosure of Invention
The embodiment of the invention aims to provide a test case priority ordering method and a test case priority ordering system, which aim to solve the problem that the existing test case priority ordering accuracy is low.
The embodiment of the invention is realized in such a way that the test case prioritizing method comprises the following steps:
extracting a test case sequence library to obtain a case extraction sequence, and sequentially obtaining a case observation space of case pairs in the case extraction sequence, wherein the case pairs comprise at least one test case;
controlling an agent to sort the case pairs according to the case observation space and a learning strategy, and determining an adjudication result of the case pairs according to the sorting result;
determining a reward value of the case pair according to the adjudication result, the historical execution time difference, and the position information of the case pair, and feeding the reward value back to the agent to adjust the learning strategy;
returning to the step of sequentially obtaining the case observation space of case pairs in the case extraction sequence until all case pairs in the case extraction sequence are sorted;
updating the priority factor of each test case according to the reward value of each case pair, and adjusting the order of the case extraction sequence according to the updated priority factors, wherein the priority factor is used to characterize the test priority potential of each test case;
updating a merge block in the case extraction sequence after the order adjustment, and merging the case extraction sequence after the order adjustment according to the updated merge block, wherein the merging process is used to adjust the case pairs;
returning, according to the case extraction sequence after the merging process, to the step of sequentially obtaining the case observation space of case pairs in the case extraction sequence until the agent meets a convergence condition;
inputting the case sequence to be sorted into the converged agent to obtain an optimal case sorting sequence.
Preferably, before extracting the test case sequence library, the method further comprises:
performing feature dimension reduction on the test cases in each integration period respectively to obtain dimension reduction features, and calculating feature weights of the dimension reduction features respectively;
clustering each test case according to the characteristic weight to obtain a cluster, and determining the priority factor of the test case in each cluster according to the history execution information of each test case;
and sequencing all the test cases according to the priority factors to obtain an initial test case sequence, and returning to execute the step of respectively calculating the feature weights of all the dimension reduction features until the test cases in all the integration periods are sequenced to obtain the test case sequence library.
It is another object of an embodiment of the present invention to provide a test case prioritization system, including: a clustering and sorting module, a reward factor module, a dynamic priority factor local fine-tuning module, and a prioritization module;
the reward factor module comprises a case extraction module and an intelligent sorting module, and the dynamic priority factor local fine-tuning module comprises a strategy adjustment module, a priority factor updating module, and a merging processing module;
the case extraction module is used for extracting a test case sequence library to obtain a case extraction sequence and sequentially acquiring the case observation space of case pairs in the case extraction sequence, wherein a case pair comprises at least one test case;
the intelligent ordering module is used for controlling the intelligent agent to order the use case pairs according to the use case observation space and the learning strategy, and determining an adjudication result of the use case pairs according to the ordering result;
the strategy adjustment module is used for determining a reward value of the case pair according to the adjudication result, the historical execution time difference, and the position information of the case pair, and feeding the reward value back to the agent to adjust the learning strategy;
returning to the step of sequentially obtaining the case observation space of case pairs in the case extraction sequence until all case pairs in the case extraction sequence are sorted;
the priority factor updating module is used for updating the priority factor of each test case according to the reward value of each case pair and adjusting the order of the case extraction sequence according to the updated priority factors, wherein the priority factor is used to characterize the test priority of each test case;
The merging processing module is used for updating merging blocks in the case extraction sequence after the sequence adjustment, merging the case extraction sequence after the sequence adjustment according to the updated merging blocks, and the merging processing is used for adjusting the case pairs;
returning, according to the case extraction sequence after the merging process, to the step of sequentially obtaining the case observation space of case pairs in the case extraction sequence until the agent meets a convergence condition;
the priority ranking module is used for inputting the case sequence to be ranked into the converged intelligent agent to obtain an optimal case ranking sequence;
the clustering and sequencing module is used for carrying out feature dimension reduction on the test cases in each integration period respectively to obtain dimension reduction features, and calculating feature weights of the dimension reduction features respectively;
clustering each test case according to the characteristic weight to obtain a cluster, and determining the priority factor of the test case in each cluster according to the history execution information of each test case;
and sequencing all the test cases according to the priority factors to obtain an initial test case sequence, and returning to execute the step of respectively calculating the feature weights of all the dimension reduction features until the test cases in all the integration periods are sequenced to obtain a test case sequence library.
According to the embodiments of the invention, the adjudication result, the historical execution time difference, and the position information of the case pairs together provide the agent with richer reward feedback, prompting it to adjust its learning strategy during learning and improving the prioritization accuracy of the converged agent on the case sequence to be sorted. Updating the priority factor of each test case allows wrongly sorted test cases to be adjusted within a local range without interfering with the learned strategy, so that sorting errors can be corrected in real time. This guides the agent to quickly adjust its sorting strategy, optimizes the dynamic learning environment of reinforcement learning, improves the generalization capability of reinforcement learning, and further improves the accuracy of test case sorting.
Drawings
FIG. 1 is a flow chart of a test case prioritization method provided by a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a test case prioritization system according to a second embodiment of the present invention;
FIG. 3 is a schematic flow chart of an implementation of the clustering and ranking module according to the second embodiment of the present invention;
FIG. 4 is a flowchart illustrating an implementation of the reward factor module according to the second embodiment of the invention;
FIG. 5 is a schematic flow chart of an implementation of the dynamic priority factor local fine-tuning module according to the second embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal device according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Example 1
Referring to fig. 1, a flowchart of a test case prioritization method according to a first embodiment of the present invention is provided, where the test case prioritization method may be applied to any terminal device or system, and the test case prioritization method includes:
step S10, extracting a test case sequence library to obtain a case extraction sequence, and sequentially obtaining a case observation space of case pairs in the case extraction sequence;
the test case sequence library stores a plurality of initial test case sequences, the extracted initial test case sequences are case extraction sequences, the case pairs comprise at least one test case, and the case observation space refers to environmental information required by an agent learning ordering strategy.
Optionally, before extracting the test case sequence library, the method further includes:
performing feature dimension reduction on the test cases in each integration period respectively to obtain dimension reduction features, and calculating feature weights of the dimension reduction features respectively;
clustering each test case according to the characteristic weight to obtain a cluster, and determining the priority factor of the test case in each cluster according to the history execution information of each test case;
sequencing all test cases according to the priority factors to obtain an initial test case sequence, and returning to execute the step of respectively calculating the feature weights of all the dimension reduction features until the test cases in all the integration periods are sequenced to obtain the test case sequence library;
in order to provide similar and diverse initial learning environments for the intelligent agents, the LLE dimension reduction algorithm is combined with the K-means clustering technology to mine similar test cases in each integration period, initial priority factors are distributed for the test cases, and the test cases are ordered according to the initial priority factors to obtain initial test case sequences of each period. In addition, an initial environment for reinforcement learning is constructed according to the test case sequence and the features.
Further, the formula adopted for determining the priority factor of the test cases in each cluster according to the historical execution information of each test case comprises:
wherein n_ij is the number of test cases contained in the j-th cluster after clustering in integration period c_i; V_k represents the adjudication result of test case t_k; T̄_ij represents the average execution time of all test cases in the j-th cluster of integration period c_i; λ is used to characterize the relative importance of the test case adjudication result and the execution time; K is the total number of clusters; PF_k is the priority factor; 1 ≤ j ≤ K, 0 ≤ λ ≤ 1, and N is the number of integration periods.
Further, the formula adopted for calculating the feature weight of each dimension reduction feature comprises:
wherein x_ik represents the k-th dimension feature of test case t_i after feature dimension reduction, m represents the number of feature dimensions after dimension reduction, n is the total number of test cases contained in one integration period, and w_k represents the feature weight, with the weights summing to 1.
Preferably, clustering each test case according to the feature weight includes:
calculating the distance between each test case according to the characteristic weight, and clustering each test case according to the distance to obtain the cluster;
the K-means clustering based on LLE not only considers the nonlinear structure among test cases, but also considers the influence of different test case characteristics through weighting Euclidean distance, thereby providing effective technical support for optimizing the initial environment of reinforcement learning.
Step S20, controlling an agent to sequence the use case pairs according to the use case observation space and the learning strategy, and determining an adjudication result of the use case pairs according to the sequence result;
the agent in this embodiment is a reinforcement learning agent, which is based on an Actor-Critic (A2C) algorithm and is built by using a deep neural network, where the A2C algorithm is a framework of a learning strategy in reinforcement learning, and the reinforcement learning agent (agent) refers to an entity that can observe a learning environment, take action, and make a decision according to feedback. The agent obtains information and rewards through interaction with the learning environment. Reinforcement learning (Reinforcement Learning, RL), also known as re-excitation learning, evaluation learning, or reinforcement learning, is one of the paradigm and methodology of machine learning to describe and solve the problem of agents through learning strategies to maximize returns or achieve specific goals during interactions with an environment.
Optionally, the sorting criteria of the agent are:
wherein T is the set of test cases in one cycle, i represents the index of a test case, π(T) represents the result of prioritizing T, sort(·) represents ordering according to the index, and V_i and time_i respectively represent the adjudication result and execution time of test case t_i.
Step S30, determining a reward value of the use case pair according to the adjudication result, the historical execution time difference and the position information of the use case pair, and feeding back the reward value to the intelligent agent to adjust the learning strategy;
In order for the agent in reinforcement learning to obtain richer reward information and to improve the accuracy of fine-tuning the dynamic learning environment, the reward factor of reinforcement learning is designed by combining the adjudication result, execution time, and position information of the test cases.
The learning environment formed by the initial test case sequences provides an advantageous starting point for continuously acquiring the optimal ordering of test cases across different integration periods with reinforcement learning. During reinforcement learning, the reward factor is the key factor that motivates the agent to take accurate sorting actions. To provide more accurate reward feedback for the agent, the adjudication result, execution time, and position information of the test cases are comprehensively considered, and the reward factor is designed based on them.
In test case prioritization research, failed test cases should be executed first; secondly, as many test cases as possible should be executed within a given time to check the correctness of the code. Therefore, failed test cases have the highest priority, followed by test cases with shorter historical execution time. In addition, the position information of a test case is also a reference standard reflecting the sorting result obtained by reinforcement learning.
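The two priority rules above (failed cases first, then shorter historical execution time) can be expressed as a simple sort key; the case records here are hypothetical:

```python
# Hypothetical records: (name, verdict, historical execution time); verdict 1 = failed.
cases = [
    ("t1", 0, 12.0),
    ("t2", 1, 30.0),
    ("t3", 0, 3.0),
    ("t4", 1, 8.0),
]
# Failed cases first (verdict descending), then shorter historical time first.
ordered = sorted(cases, key=lambda c: (-c[1], c[2]))
names = [name for name, _, _ in ordered]   # ['t4', 't2', 't3', 't1']
```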
Optionally, the formula adopted for determining the reward value of a case pair according to the adjudication result, the historical execution time difference, and the position information of the case pair is defined over the following quantities:
s indicates whether the sorting behaviour of the agent is correct, with 0 representing an accurate action and 1 an erroneous action; the adjudication result of test case t_i or t_j is a Boolean value, with 0 indicating execution success and 1 indicating execution failure; if test case t_i or test case t_j has an adjudication result of 1, the verdict factor f is set to 2, otherwise f is set to 1; pos represents the position of the highest-priority test case selected from t_i and t_j; γ is a hyper-parameter measuring how quickly the position weight decays; r_ij represents the reward value of the case pair formed by test cases t_i and t_j; and Δt represents the historical execution time difference.
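Since the exact reward formula is given in the patent as an image, the following is only one plausible composition of the three signals described (correctness, verdict factor, position decay, and time difference); the function name and the multiplicative form are assumptions for illustration:

```python
import math

def reward(correct, any_failed, pos, dt, gamma=0.5):
    """Illustrative reward for one case pair (not the patent's exact formula):
    - correct: whether the agent's ordering of the pair was accurate (s = 0) or not (s = 1),
      mapped to +1 / -1
    - any_failed: verdict factor f, 2 if either case in the pair failed, else 1
    - pos: position of the higher-priority case; weight exp(-gamma * pos) decays with depth
    - dt: historical execution time difference of the pair (assumed normalised)
    """
    s = 0 if correct else 1
    f = 2 if any_failed else 1
    return (1 - 2 * s) * f * math.exp(-gamma * pos) * (1 + abs(dt))
```

Under this sketch, correct orderings earn positive reward, pairs containing a failed case earn double the magnitude, and pairs deeper in the sequence contribute less.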
Step S40, returning to the step of sequentially obtaining the case observation space of case pairs in the case extraction sequence until all case pairs in the case extraction sequence are sorted;
step S50, updating the priority factors of each test case according to the rewarding value of each case pair, and sequentially adjusting the case extraction sequence according to the updated priority factors;
This step updates the priority factors and locally fine-tunes the order of the test cases within each merge block according to the updated priority factors; the fine-tuned test case sequence is the object of the next round of learning and sorting. The priority factor is used to characterize the test priority potential of each test case. To provide the agent with a more accurate and diverse dynamic sorting environment and to improve the accuracy of its sorting behaviour in the face of sporadic situations, a local fine-tuning strategy for the test case sequence is designed: the priority factor of each test case is dynamically updated according to the feedback (i.e., the reward value) of each round of sorting behaviour, and the test case order is fine-tuned based on the priority factors and the relative positions of the test cases.
The agent continuously adjusts its learning strategy through the feedback provided by the reward factor to obtain the optimal ordering. However, in a continuous integration environment, the test cases and the optimal ranking result differ between integration periods, which makes the ranking results of reinforcement learning unstable across periods. In addition, the dynamic learning environment plays a significant role in the agent's decisions. Therefore, a local fine-tuning strategy based on a dynamic priority factor is designed to improve the agent's flexibility in learning the optimal ordering under continuous integration and to improve its generalization capability.
Optionally, the formula adopted for updating the priority factor of each test case according to the reward value of each case pair is defined over the following quantities:
wherein PF_i is the priority factor of test case t_i, con_i represents the contribution of test case t_i in the ranking, r_i is the reward value of test case t_i, b is the merge block size, and p_i represents the relative position of test case t_i in the current merge block.
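A hedged sketch of such an update might look as follows; damping the reward's contribution by the case's relative position in the merge block follows the description above, but the exact functional form, names, and learning rate are assumptions:

```python
def update_priority_factor(pf, reward, rel_pos, block_size, lr=0.1):
    """Illustrative priority-factor update (not the patent's exact formula):
    the reward's contribution is damped by the case's relative position in the
    current merge block, so earlier positions weigh more."""
    position_weight = 1.0 - rel_pos / block_size
    return pf + lr * reward * position_weight
```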
Step S60, updating the merging blocks in the case extraction sequence after the sequence adjustment, and merging the case extraction sequence after the sequence adjustment according to the updated merging blocks;
The merging process is used to adjust the case pairs; the merge block is updated by doubling its size.
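The merge processing with a doubling block size resembles a bottom-up merge sort pass; the following sketch illustrates that structure, using a priority-factor comparison as an assumed ordering rule:

```python
def merge_rounds(seq, priority):
    """Bottom-up merge passes: neighbouring blocks of size b are merged by comparing
    case pairs drawn from each block, and b doubles each round; the ordering rule
    here (higher priority factor first) is illustrative."""
    b = 1
    while b < len(seq):
        merged = []
        for start in range(0, len(seq), 2 * b):
            left, right = seq[start:start + b], seq[start + b:start + 2 * b]
            block = []
            while left and right:   # each comparison corresponds to one "case pair"
                block.append(left.pop(0) if priority[left[0]] >= priority[right[0]]
                             else right.pop(0))
            block += left + right
            merged += block
        seq = merged
        b *= 2                      # merge block update: double the block size
    return seq

ranked = merge_rounds(["b", "a", "d", "c"], {"a": 3, "b": 1, "c": 2, "d": 5})
```

Each round thus re-forms the case pairs over larger spans of the sequence, which is how the merging process "adjusts the case pairs" between learning rounds.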
Step S70, returning to the step of sequentially obtaining the use case observation space of the use case pairs in the use case extraction sequence according to the use case extraction sequence after the merging process until the intelligent agent meets the convergence condition;
and judging that the agent meets the convergence condition until the learning of the optimal ordering strategy is finished or the threshold value of the learning step number is reached, namely, learning the optimal ordering strategy of the current integration period, and finishing the construction of the agent of the current integration period.
Optionally, in this step, a random test case sequence of the next integration period is input into the trained agent, and the optimal test case sequence of the next integration period is predicted. The agent is trained with the test cases of the current integration period, and the optimal order of the next period is predicted with the current model. This process is repeated until all integration periods are completed.
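The per-period cycle described above can be sketched as a small driver loop; the train and predict callables are purely illustrative stubs standing in for the agent:

```python
def run_continuous_integration(periods, train, predict):
    """Train on each integration period in turn, then use the current model to
    predict the optimal order of the next period (train/predict are supplied
    by the caller; here they stand in for the reinforcement learning agent)."""
    predictions = []
    model = None
    for i, period in enumerate(periods):
        model = train(model, period)                 # learn ordering for period i
        if i + 1 < len(periods):
            predictions.append(predict(model, periods[i + 1]))
    return predictions

# Stub behaviour for illustration: "training" just yields an ascending sort.
demo = run_continuous_integration(
    [[3, 1], [2, 5, 4], [9, 7]],
    train=lambda model, period: (lambda seq: sorted(seq)),
    predict=lambda model, seq: model(seq),
)
```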
And S80, inputting the case sequence to be sequenced into the converged intelligent agent to obtain an optimal case sequencing sequence.
In this embodiment, the adjudication result, the historical execution time difference, and the position information of the case pairs together provide the agent with richer reward feedback, prompting it to adjust its learning strategy during learning and improving the prioritization accuracy of the converged agent on the case sequence to be sorted. Updating the priority factor of each test case allows wrongly sorted test cases to be adjusted within a local range without interfering with the learned strategy, so that sorting errors can be corrected in real time; this guides the agent to quickly adjust its sorting strategy, optimizes the dynamic learning environment of reinforcement learning, improves its generalization capability, and further improves the accuracy of test case sorting.
Example two
Referring to fig. 2, which shows a schematic structural diagram of a test case prioritization system according to a second embodiment of the present invention, the system includes: a clustering and sorting module 10, a reward factor module 11, a dynamic priority factor local fine-tuning module 12, and a prioritization module 13.
The reward factor module 11 comprises a case extraction module and an intelligent sorting module, and the dynamic priority factor local fine-tuning module 12 comprises a strategy adjustment module, a priority factor updating module, and a merging processing module.
The clustering and ranking module 10 is configured to: performing feature dimension reduction on the test cases in each integration period respectively to obtain dimension reduction features, and calculating feature weights of the dimension reduction features respectively;
clustering each test case according to the characteristic weight to obtain a cluster, and determining the priority factor of the test case in each cluster according to the history execution information of each test case;
and sequencing all the test cases according to the priority factors to obtain an initial test case sequence, and returning to execute the step of respectively calculating the feature weights of all the dimension reduction features until the test cases in all the integration periods are sequenced to obtain a test case sequence library.
The case extraction module is used for extracting a test case sequence library to obtain a case extraction sequence and sequentially acquiring the case observation space of case pairs in the case extraction sequence, wherein a case pair comprises at least one test case.
The intelligent ordering module is used for controlling the intelligent agent to order the use case pairs according to the use case observation space and the learning strategy, and determining an adjudication result of the use case pairs according to the ordering result;
the strategy adjustment module is used for determining a reward value of the case pair according to the adjudication result, the historical execution time difference, and the position information of the case pair, and feeding the reward value back to the agent to adjust the learning strategy;
returning to the step of sequentially obtaining the case observation space of case pairs in the case extraction sequence until all case pairs in the case extraction sequence are sorted;
the priority factor updating module is used for updating the priority factor of each test case according to the reward value of each case pair and adjusting the order of the case extraction sequence according to the updated priority factors, wherein the priority factor is used to characterize the test priority of each test case;
the merging processing module is used for updating merging blocks in the case extraction sequence after the sequence adjustment, merging the case extraction sequence after the sequence adjustment according to the updated merging blocks, and the merging processing is used for adjusting the case pairs;
returning, according to the case extraction sequence after the merging process, to the step of sequentially obtaining the case observation space of case pairs in the case extraction sequence until the agent meets a convergence condition;
the priority ranking module 13 is configured to input the to-be-ranked use case sequence into the converged agent to obtain an optimal use case ranking sequence.
Referring to fig. 3, a flow chart of the implementation of the clustering and sequencing module 10 is shown:
1) Related concepts
Definition 1. Continuous integration period: the process of verifying the stability and correctness of code by continuously executing a series of test cases.
Each integration period is recorded as c_i = (T_i, V_i), wherein T_i represents the set of test cases and V_i represents the adjudication results of the test cases; each adjudication result is a Boolean value, with 0 indicating execution success and 1 indicating execution failure.
Definition 2. Dimension-reduced feature of a test case: the feature obtained by reducing the dimensionality of the original test case features using the LLE dimension reduction method.
Any test case is recorded as t_i, and its feature after LLE dimension reduction is recorded as x_i.
Each continuous integration period contains a set of test cases whose original features have high-dimensional and nonlinear relationships, which is detrimental to mining the potential correlations between them. Therefore, the LLE algorithm, which can handle nonlinear relations while retaining local features, is adopted to reduce the dimensionality of the original features.
Definition 3. Weighted Euclidean distance: a Euclidean distance based on the feature weights of the test cases after dimension reduction.
The traditional K-means clustering algorithm based on the Euclidean distance handles data with a nonlinear structure poorly and considers only the accumulated differences between vectors, so its clustering results are inaccurate. To mitigate the impact of this problem on test case similarity mining in continuous integration, and to improve the accuracy of test case clustering within an integration period, a weighted Euclidean distance based on the influence of different features is designed. The calculation of the feature weight is shown in formula one.
(equation one);
wherein x_{i,j} denotes the j-th dimensional feature of test case tc_i after dimension reduction, d denotes the number of feature dimensions after reduction, n is the total number of test cases contained in one integration period, and w_j denotes the feature weight.
The weighted euclidean distance calculation is shown in formula two:
(formula II);
wherein tc_u and tc_v respectively denote the u-th and v-th test cases.
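The weight-and-distance computation of formulas one and two can be sketched as follows. Formula one itself appears only as an image in the source, so the variance-share weighting used here is an assumption; the function and variable names are likewise illustrative.

```python
import numpy as np

def feature_weights(X):
    """Per-dimension weights for the reduced features (stand-in for formula one).

    X : (n, d) array of LLE-reduced test case features.
    Assumption: each dimension's weight is its share of the total variance,
    so the weights are non-negative and sum to 1.
    """
    var = X.var(axis=0)
    return var / var.sum()

def weighted_euclidean(x_u, x_v, w):
    """Weighted Euclidean distance between two test cases (formula two)."""
    diff = np.asarray(x_u, float) - np.asarray(x_v, float)
    return float(np.sqrt(np.sum(w * diff ** 2)))
```

Scaling every feature by sqrt(w_j) turns this metric into the ordinary Euclidean distance, which is one way to make a stock K-means implementation honour the weighted distance.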
In this embodiment, for each continuous integration period, the test case features are reduced to d dimensions and then clustered into k clusters.
Definition 4. Priority factor: one factor quantifying the potential advantages of a test case.
Initial priority factor: to optimize the initial learning environment of reinforcement learning, an initial priority factor is introduced. The initial priority factor of test case tc_i in continuous integration period T_t is defined as p_i, calculated as shown in formula three:
(equation three);
wherein n_c is the number of test cases contained in the c-th cluster of period T_t after LLE-K-means clustering; v_i represents the adjudication result of test case tc_i; time_c represents the average execution time of all test cases in the c-th cluster of T_t; and ω quantifies the relative importance of the test case adjudication results and execution time. When the number of failed test cases is larger, ω can be set to a larger value so that the agent pays more attention to those test cases during sorting, and vice versa.
After the initial priority factor of each test case is obtained with formula three, the test cases are sorted according to this factor to form the initial test case sequence. Their features constitute the initial learning environment for reinforcement learning.
2) Initial test case ordering using LLE's K-means clusters
The LLE-based K-means clustering method not only considers the nonlinear structures among test cases, but also considers the influence of different test case characteristics through weighted Euclidean distance, thereby providing effective technical support for optimizing the initial environment of reinforcement learning. The specific flow of acquiring the initial test case sequence of each integration period and the initial learning environment of reinforcement learning is as follows:
Step 1.1: extract the original features of the test cases in each integration period, and reduce the original features of each integration period to d dimensions in turn using the LLE dimension reduction algorithm.
Step 1.2: calculate the weight of each feature after dimension reduction using formula one, calculate the distances between test cases using formula two, and cluster the test cases into k clusters according to these distances.
Step 1.3: and according to the clustering result, counting the execution results and the historical execution time of all the test cases in each cluster, calculating the priority factor of each cluster by using a formula III, and distributing the factor to each test case in the cluster. Note that: to promote the learning efficiency of reinforcement learning in the continuously integrated test case priorities and to increase the generalization ability of reinforcement learning, the priority factor of each test case is not further subdivided, but is used as the initial priority factor of the test cases in the cluster.
Step 1.4: and sequencing the test cases in the integration period according to the initial priority factors to form an initial test case sequence. The features of the test cases in the sequence constitute an initial learning environment for reinforcement learning.
Step 1.5: steps 1.2 through 1.4 are repeated until all test cases in the integration period have been ordered. They are then saved to a test case library to continuously learn test case priorities for different periods.
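Steps 1.3 and 1.4 can be sketched as below. Formula three appears only as an image in the source, so the concrete combination of failure rate and normalised execution time is an assumption; the cluster assignments are taken as given from step 1.2, and all names are illustrative.

```python
import numpy as np

def initial_priority_order(clusters, verdicts, times, omega=0.5):
    """Steps 1.3-1.4: cluster-level priority factors -> initial sequence.

    clusters : cluster index per test case (from weighted K-means, step 1.2)
    verdicts : adjudication results, 0 = success, 1 = failure
    times    : historical execution time per test case
    omega    : weight trading off verdicts against execution time
    """
    clusters = np.asarray(clusters)
    verdicts = np.asarray(verdicts, float)
    times = np.asarray(times, float)
    factor = np.empty(len(clusters))
    for c in np.unique(clusters):
        m = clusters == c
        # Assumed concrete form of formula three: reward clusters containing
        # failures, penalise clusters with long average execution time; every
        # case in a cluster shares the cluster-level factor (note, step 1.3).
        factor[m] = omega * verdicts[m].mean() \
            - (1.0 - omega) * times[m].mean() / times.mean()
    # Step 1.4: sort descending by the initial priority factor
    return [int(i) for i in np.argsort(-factor, kind="stable")]
```

The stable sort keeps the within-cluster order untouched, matching the note under step 1.3 that the factor is not subdivided inside a cluster.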
Referring to fig. 4, a flowchart of the implementation of the bonus factor module 11 is shown:
In test case priority research, failed test cases should be executed as early as possible, and as many test cases as possible should be executed within a given time to verify the correctness of the code. Therefore, failed test cases have the highest priority, followed by test cases with shorter historical execution time. In addition, the position information of a test case is a reference for judging the ranking result obtained by reinforcement learning, so the reward factor is designed based on the adjudication result, the execution time, and the position information. The reward factor and reward value are obtained as follows:
step 2.1: and extracting an initial test case sequence of one integration period from the test case library to obtain a case extraction sequence.
Step 2.2: obtain the case observation space of the first case pair in the case extraction sequence.
Step 2.3: the intelligent agent sorts the test cases in the case pairs according to the test case observation space of the current case pair and the learning strategy, and the sorting criterion is shown in a formula IV:
(equation four);
wherein T is the set of test cases in one cycle, idx denotes the index of a test case, rank(T) denotes the prioritized result of T, sort denotes ordering according to index, and v_i and time_i respectively denote the adjudication result and execution time of test case tc_i.
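The sorting criterion of formula four, failures first and then shorter historical execution time, can be expressed as a pairwise rule. This is a hedged sketch; the function and parameter names are illustrative.

```python
def order_pair(i, j, verdict, time):
    """Order a case pair per the criterion above: a failing test case
    (verdict 1) outranks a passing one; with equal verdicts, the case with
    the shorter historical execution time comes first."""
    key = lambda k: (-verdict[k], time[k])  # fail first, then faster first
    return (i, j) if key(i) <= key(j) else (j, i)
```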
Step 2.4: acquire the adjudication result, the historical execution time difference, and the position information of the current case pair according to the case extraction sequence to obtain the reward factor, and use the reward factor to calculate the reward value of the agent's ordering of the test cases in the case pair. A positive reward value indicates that the agent made a correct ordering action; otherwise, it made an incorrect one. The expression of the reward value is shown in formula five:
(equation five);
wherein, when the merge block size is 2, a case pair comprises two test cases; a indicates whether the sorting behavior of the agent is correct, 0 representing a correct action and 1 an incorrect one; v is determined by the adjudication results of test cases tc_i and tc_j, each of which is a Boolean value, 0 indicating execution success and 1 indicating execution failure; if either tc_i or tc_j has adjudication result 1, v is set to 2, otherwise v is set to 1; pos denotes the position of the highest-priority test case selected from tc_i and tc_j; λ is a hyperparameter measuring the decay speed of the position weight; r_{i,j} denotes the reward value of the case pair composed of tc_i and tc_j; and Δt denotes the historical execution time difference.
Step 2.5: and saving the rewarding value, the test case in the current case pair and the position information, feeding back the value to the intelligent agent, and adjusting the learning strategy of reinforcement learning.
Step 2.6: and selecting the next use case pair, and repeatedly executing the steps 2.2 to 2.5 until the test use cases of the current sequence are all ordered.
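Steps 2.4 and 2.5 can be sketched as below. The exact form of formula five is an image in the source; the sign, verdict factor, and position decay follow the textual description, while λ's default and the use of the time difference as a multiplier are assumptions, and all names are illustrative.

```python
import math

def pair_reward(behaviour, any_failure, pos, dt, lam=10.0):
    """Sketch of the formula-five reward for one case pair.

    behaviour   : 0 if the agent's ordering action was correct, 1 if wrong
    any_failure : True if either case in the pair has adjudication result 1
    pos         : position (index) of the highest-priority case in the pair
    dt          : historical execution time difference of the pair
    lam         : hyperparameter governing how fast the position weight decays
    """
    v = 2 if any_failure else 1          # verdict factor from the description
    sign = 1 if behaviour == 0 else -1   # correct action -> positive reward
    return sign * v * math.exp(-pos / lam) * (1.0 + dt)
```

Pairs containing a failure earn double the reward, and pairs near the front of the sequence matter more, which matches the stated intent of the reward factor.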
Referring to fig. 5, a flow chart of the implementation of the local fine-tuning module 12 for dynamic priority factors is shown:
step 3.1: and obtaining the rewarding values of all pairs of test cases in the current round and the corresponding positions (i.e. indexes) of all pairs of test cases through the rewarding factors, and updating the priority factors of the test cases by using the information.
Dynamic priority factor: in order to enable the intelligent agent to continuously adapt to the dynamic environment and reduce the influence of the error sequencing behavior on subsequent learning, a dynamic priority factor is introduced, and the calculation process is shown as a formula six:
(formula six);
wherein p'_i is the priority factor of test case tc_i from the previous round of ordering (if the previous round is the initial sequence, p'_i is the initial priority factor); c_i represents the contribution of tc_i in this round of ordering; r_i is the reward value of tc_i; b is the merge block size of the round; and pos_i represents the relative position of tc_i in the current merge block.
Step 3.2: and locally fine-tuning the sequence of the test cases in each merging block according to the updated priority factors, wherein the fine-tuned case extraction sequence is the object of the next round of learning sequencing.
Step 3.3: update the merge block to twice its original size. For example, when the current merge block size is 2, the test cases in the case extraction sequence are combined two by two in sorted order to obtain case pairs; after the update the merge block size becomes 4, and the test cases in the case extraction sequence are merged four by four in sorted order.
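The block partitioning of step 3.3 can be sketched as follows; the function name is illustrative.

```python
def merge_blocks(sequence, block_size):
    """Partition the ordered case extraction sequence into merge blocks of
    the current size; step 3.3 doubles block_size each round (2, 4, 8, ...)."""
    return [sequence[i:i + block_size]
            for i in range(0, len(sequence), block_size)]
```

Each round the sequence is re-partitioned with the doubled size, so the last block may be shorter than the others when the sequence length is not a multiple of the block size.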
Step 3.4: and continuing to execute the steps 2.2 to 2.6 and the steps 3.1 to 3.3 until the learning of the optimal sequencing strategy is completed or the threshold value of the learning step number is reached, wherein the intelligent agent meets the convergence condition. That is, the optimal ordering strategy for the current integration period has been learned, and the construction of the agent for the current integration period is complete.
Step 3.5: and inputting the case sequence to be sequenced into the converged agent to obtain the optimal case sequencing sequence.
Optionally, the continuous learning process of the agent may further be: training the intelligent agent by using the test cases of the current integration period, and predicting the optimal sequence of the next integration period according to the current model. This process is cycled until all integration periods are completed.
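The optional continual scheme above can be sketched as below; the agent object with train/predict methods is an assumed interface, not part of the source.

```python
def continual_training(periods, agent):
    """Train the agent on the current integration period, then use the
    current model to predict the ordering of the next period; repeat until
    all integration periods are consumed."""
    predictions = []
    for t, period in enumerate(periods):
        if t > 0:
            predictions.append(agent.predict(period))  # rank next period first
        agent.train(period)
    return predictions
```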
The embodiment provides a test case prioritization system for performing test case prioritization by combining K-means clustering of LLE and dynamic priority factors, and further improves test case prioritization efficiency in an integrated test environment by means of reinforcement learning technology. This is mainly due to:
1) And capturing similar test cases by using a K-means clustering method of LLE, and sequencing the initial test cases. Compared with other clustering methods, the K-means clustering method combined with LLE considers the nonlinear structure of the test case characteristics, and improves the accuracy of test case clustering. In addition, the test cases are ordered on the basis, so that a similar and various learning environment is provided for reinforcement learning, and the learning ordering efficiency and generalization capability of the reinforcement learning are improved.
2) A bonus factor based on the arbitration result, execution time, and location information is designed. On the basis of the historical execution time of the test cases, the execution time difference among the test cases is calculated, so that under the condition that the judging results are the same, the intelligent agent sequences the test cases more accurately. In addition, the position of the test case is also the basis for reflecting whether the sorting action of the agent is correct, so that the position information is also included in the rewarding factors. From several different angles of the adjudication result, the historical execution time and the position of the test case, more sufficient rewards feedback is provided for the intelligent agent, and the adjustment of the ordering strategy in learning is promoted.
3) A local fine tuning strategy for dynamic priority factors is presented. Dynamically updating the dynamic priority factor of the test case according to the reward value of each round of sorting of reinforcement learning. And after each round of learning is finished, the sequence of the test case is dynamically and locally fine-tuned by updating the priority factor and the size of each round of merging block. On the one hand, the test cases with obvious error sequencing are adjusted in a local range, so that the learned strategies are not interfered, and the error sequencing can be adjusted in real time, so that an intelligent agent is guided to quickly adjust the sequencing strategies. On the other hand, the dynamic learning environment of reinforcement learning is optimized, and the generalization capability of reinforcement learning is improved.
Example III
Fig. 6 is a block diagram of a terminal device 2 according to a third embodiment of the present application. As shown in fig. 6, the terminal device 2 of this embodiment includes: a processor 20, a memory 21 and a computer program 22, such as a program of a test case prioritization method, stored in the memory 21 and executable on the processor 20. The steps of the various embodiments of the test case prioritization methods described above are implemented by the processor 20 when executing the computer program 22.
Illustratively, the computer program 22 may be partitioned into one or more modules that are stored in the memory 21 and executed by the processor 20 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 22 in the terminal device 2. The terminal device may include, but is not limited to, a processor 20, a memory 21.
The processor 20 may be a central processing unit (Central Processing Unit, CPU) plus a graphics processor (Graphics Processing Unit, GPU), but may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 21 may be an internal storage unit of the terminal device 2, such as a hard disk or a memory of the terminal device 2. The memory 21 may be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 2. Further, the memory 21 may also include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 21 may also be used for temporarily storing data that has been output or is to be output.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium, which may be nonvolatile or volatile. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer readable storage media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (8)

1. A method for prioritizing test cases, the method comprising the steps of:
extracting a test case sequence library to obtain a case extraction sequence, and sequentially obtaining a case observation space of case pairs in the case extraction sequence, wherein the case pairs comprise at least one test case;
the control intelligent agent sorts the use case pairs according to the use case observation space and the learning strategy, and determines the judging result of the use case pairs according to the sorting result;
determining a rewarding value of the use case pair according to the adjudicating result, the historical execution time difference and the position information of the use case pair, and feeding back the rewarding value to the intelligent agent to adjust the learning strategy;
Returning to the step of executing the case observation space for sequentially obtaining case pairs in the case extraction sequence until the case pairs in the case extraction sequence are all ordered;
updating the priority factors of each test case according to the reward values of each case pair, and sequentially adjusting the case extraction sequences according to the updated priority factors, wherein the priority factors are used for representing the test priority of each test case;
updating a merging block in the use case extraction sequence after the sequence adjustment, and merging the use case extraction sequence after the sequence adjustment according to the updated merging block, wherein the merging process is used for adjusting the use case pair;
returning to the step of executing the case observation space for sequentially obtaining case pairs in the case extraction sequence according to the case extraction sequence after the merging processing until the intelligent agent meets a convergence condition;
inputting the case sequence to be sequenced into the converged intelligent agent to obtain an optimal case sequencing sequence;
before extracting the test case sequence library, the method further comprises the following steps:
performing feature dimension reduction on the test cases in each integration period respectively to obtain dimension reduction features, and calculating feature weights of the dimension reduction features respectively;
Clustering each test case according to the characteristic weight to obtain a cluster, and determining the priority factor of the test case in each cluster according to the history execution information of each test case;
and sequencing all the test cases according to the priority factors to obtain an initial test case sequence, and returning to execute the step of respectively calculating the feature weights of all the dimension reduction features until the test cases in all the integration periods are sequenced to obtain the test case sequence library.
2. The method of claim 1, wherein determining the formulas adopted by the priority factors of the test cases in each cluster according to the historical execution information of each test case comprises:
wherein n_c is the number of test cases contained in the c-th cluster of integration period T_t after clustering; v_i represents the adjudication result of test case tc_i; time_c represents the average execution time of all test cases in the c-th cluster of T_t; ω characterizes the importance of the test case adjudication results and execution time; k is the total number of clusters; p_i is the priority factor; c = 1, ..., k; t = 1, ..., N; and N is the number of integration periods.
3. The method of claim 2, wherein determining the formula for the prize value for the use case pair based on the adjudication result, the historical execution time difference, and the location information for the use case pair comprises:
wherein a indicates whether the sorting behavior of the agent is correct, 0 representing a correct action and 1 an incorrect one; v is determined by the adjudication results of test cases tc_i and tc_j, each of which is a Boolean value, 0 indicating execution success and 1 indicating execution failure; if either tc_i or tc_j has adjudication result 1, v is set to 2, otherwise v is set to 1; pos denotes the position of the highest-priority test case selected from tc_i and tc_j; λ is a hyperparameter measuring the decay speed of the position weight; r_{i,j} denotes the reward value of the case pair composed of tc_i and tc_j; and Δt denotes the historical execution time difference.
4. A method of prioritizing test cases according to claim 3, wherein updating the formulas used by the priority factors of each test case based on the prize values of each pair of cases comprises:
wherein p_i is the priority factor of test case tc_i; c_i represents the contribution of tc_i in the ranking; r_i is the reward value of tc_i; b is the merge block size; pos_i represents the relative position of tc_i in the current merge block; and n is the total number of test cases contained in one integration period.
5. The method of claim 4, wherein the formula for calculating the feature weights of the dimension reduction features comprises:
wherein x_{i,j} denotes the j-th dimensional feature of test case tc_i after dimension reduction, d denotes the number of feature dimensions after reduction, n is the total number of test cases contained in one integration period, and w_j denotes the feature weight.
6. The method of claim 5, wherein clustering test cases according to the feature weights comprises:
and calculating the distance between each test case according to the characteristic weight, and clustering each test case according to the distance to obtain the cluster.
7. The method of claim 6, wherein the ranking criteria of the agent is:
wherein T is the set of test cases in one cycle, idx denotes the index of a test case, rank(T) denotes the prioritized result of T, sort denotes ordering according to index, v_i and time_i respectively represent the adjudication result and execution time of test case tc_i, and i = 1 or 2.
8. A test case prioritization system applicable to the test case prioritization method of any one of claims 1 to 7, the system comprising: the system comprises a clustering and sorting module, a rewarding factor module, a dynamic priority factor local fine adjustment module and a priority sorting module; the rewarding factor module comprises a use case extraction module and an intelligent sorting module, and the dynamic priority factor local fine adjustment module comprises a strategy adjustment module, a priority factor updating module and a merging processing module;
The system comprises a case extraction module, a case extraction module and a test case extraction module, wherein the case extraction module is used for extracting a test case sequence library to obtain a case extraction sequence, and sequentially acquiring a case observation space of a case pair in the case extraction sequence, and the case pair comprises at least one test case;
the intelligent ordering module is used for controlling the intelligent agent to order the use case pairs according to the use case observation space and the learning strategy, and determining an adjudication result of the use case pairs according to the ordering result;
the strategy adjustment module is used for determining a rewarding value of the use case pair according to the adjudicating result, the historical execution time difference and the position information of the use case pair, and feeding back the rewarding value to the intelligent agent to adjust the learning strategy;
returning to the step of executing the case observation space for sequentially obtaining case pairs in the case extraction sequence until the case pairs in the case extraction sequence are all ordered;
the priority factor updating module is used for updating the priority factors of the test cases according to the rewarding values of the case pairs, and sequentially adjusting the case extraction sequences according to the updated priority factors, wherein the priority factors are used for representing the test priorities of the test cases;
The merging processing module is used for updating merging blocks in the case extraction sequence after the sequence adjustment, merging the case extraction sequence after the sequence adjustment according to the updated merging blocks, and the merging processing is used for adjusting the case pairs;
returning to the step of executing the case observation space for sequentially obtaining case pairs in the case extraction sequence according to the case extraction sequence after the merging processing until the intelligent agent meets a convergence condition;
the priority ranking module is used for inputting the case sequence to be ranked into the converged intelligent agent to obtain an optimal case ranking sequence;
the clustering and sequencing module is used for carrying out feature dimension reduction on the test cases in each integration period respectively to obtain dimension reduction features, and calculating feature weights of the dimension reduction features respectively;
clustering each test case according to the characteristic weight to obtain a cluster, and determining the priority factor of the test case in each cluster according to the history execution information of each test case;
and sequencing all the test cases according to the priority factors to obtain an initial test case sequence, and returning to execute the step of respectively calculating the feature weights of all the dimension reduction features until the test cases in all the integration periods are sequenced to obtain a test case sequence library.
CN202311772095.4A 2023-12-21 2023-12-21 Test case priority ordering method and system Active CN117435516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311772095.4A CN117435516B (en) 2023-12-21 2023-12-21 Test case priority ordering method and system

Publications (2)

Publication Number Publication Date
CN117435516A (en) 2024-01-23
CN117435516B (en) 2024-02-27

Family

ID=89546597


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502892A (en) * 2016-10-20 2017-03-15 杭州电子科技大学 A kind of test case prioritization method based on uml model
CN109783349A (en) * 2018-12-10 2019-05-21 江苏大学 A kind of priorities of test cases sort method and system based on dynamical feedback weight
CN111427802A (en) * 2020-06-09 2020-07-17 南京大学 Test method and system for carrying out test case priority sequencing by utilizing ensemble learning
CN113377651A (en) * 2021-06-10 2021-09-10 中国矿业大学 Class integration test sequence generation method based on reinforcement learning
CN114911704A (en) * 2022-05-20 2022-08-16 苏州浪潮智能科技有限公司 Interface test case generation method, device and equipment based on reinforcement learning
CN115470133A (en) * 2022-09-20 2022-12-13 西南民族大学 Large-scale continuous integrated test case priority ordering method, equipment and medium
CN115952078A (en) * 2022-12-05 2023-04-11 平安银行股份有限公司 Test case sequencing method, device and system and storage medium
CN116149978A (en) * 2021-11-22 2023-05-23 北京字节跳动网络技术有限公司 Service interface testing method and device, electronic equipment and storage medium
CN116662182A (en) * 2023-06-02 2023-08-29 厦门理工学院 Regression testing mixing method, device and storage medium in continuous integrated environment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11249887B2 (en) * 2019-08-27 2022-02-15 Nec Corporation Deep Q-network reinforcement learning for testing case selection and prioritization


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Clustering Approach to Improving Test Case Prioritization: An Industrial Case Study; Ryan Carlson et al.; 《2011 27th IEEE International Conference on Software Maintenance (ICSM)》; 20111231; full text *
User Session-Based Test Case Generation and Optimization Using Genetic Algorithm; Zhongsheng Qian; 《J. Software Engineering & Applications》; 20201231; full text *
Weighted Reward for Reinforcement Learning based Test Case Prioritization in Continuous Integration Testing; Guowei Li et al.; 《2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC)》; 20211231; full text *
A Reinforcement Learning-Based Test Case Prioritization Technique for Continuous Integration Environments; Zhao Yifan et al.; 《Journal of Software》; 20221118; full text *
A Reinforcement Learning Reward Mechanism for Continuous Integration Testing Optimization; He Liuliu; Yang Yang; Li Zheng; Zhao Ruilian; 《Journal of Software》; 20190515(05); full text *

Also Published As

Publication number Publication date
CN117435516A (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN109902753B (en) User recommendation model training method and device, computer equipment and storage medium
CN109961098B (en) Training data selection method for machine learning
CN111967971B (en) Bank customer data processing method and device
CN109886343B (en) Image classification method and device, equipment and storage medium
CN111310918B (en) Data processing method, device, computer equipment and storage medium
CN115391561A (en) Method and device for processing graph network data set, electronic equipment, program and medium
CN110263136B (en) Method and device for pushing object to user based on reinforcement learning model
CN111046156B (en) Method, device and server for determining rewarding data
CN117435516B (en) Test case priority ordering method and system
CN113191880A (en) Bank teller terminal cash adding suggestion determination method and device
CN111967581A (en) Interpretation method and device of clustering model, computer equipment and storage medium
CN111242274A (en) Method for analyzing a set of neural network parameters
CN110705889A (en) Enterprise screening method, device, equipment and storage medium
CN112906435B (en) Video frame optimization method and device
CN113191527A (en) Prediction method and device for population prediction based on prediction model
CN112817525A (en) Method and device for predicting reliability grade of flash memory chip and storage medium
CN114465957B (en) Data writing method and device
CN110288091A (en) Parametric learning method, device, terminal device and readable storage medium storing program for executing
CN116029370B (en) Data sharing excitation method, device and equipment based on federal learning of block chain
CN112669893B (en) Method, system, device and equipment for determining read voltage to be used
CN111563548B (en) Data preprocessing method, system and related equipment based on reinforcement learning
CN109905340B (en) Feature optimization function selection method and device and electronic equipment
US20230259682A1 (en) Buffer insertion method and device, storage medium, and electronic device
CN116579555A (en) Task scheduling method and system based on artificial fish shoals
CN117010272A (en) Decision generation method, device, computer equipment and medium based on reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant