WO2022061779A1 - Test case selection method and apparatus based on change correlation analysis - Google Patents

Test case selection method and apparatus based on change correlation analysis

Info

Publication number
WO2022061779A1
Authority
WO
WIPO (PCT)
Prior art keywords
test case
candidate
correlation
measurement value
candidate test
Prior art date
Application number
PCT/CN2020/117921
Other languages
English (en)
French (fr)
Inventor
彭飞
Original Assignee
Siemens Aktiengesellschaft (西门子股份公司)
Siemens Ltd., China (西门子(中国)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft and Siemens Ltd., China
Priority to PCT/CN2020/117921 (WO2022061779A1)
Priority to EP20954631.6A (EP4202689A4)
Priority to CN202080104035.3A (CN116097227A)
Publication of WO2022061779A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3676Test management for coverage analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites

Definitions

  • The present disclosure generally relates to the field of software development, and in particular to a test case selection method and apparatus based on change correlation analysis that are suitable for an integrated development and operations (DevOps) process.
  • In a DevOps-based software development process, software testing should be integrated into the code development, code build and code deployment process and performed in a continuous manner, so that software updates can be realized in a continuous process from development to build to deployment. For example, whenever a software developer delivers a software product, the code build process is triggered. The continuous integration engine compiles the source code and runs a set of automated unit tests. If the code does not compile successfully or any unit test fails, the code build process fails and the software developer is notified. If the code build process completes successfully, the code delivery is accepted and deployed to the test environment for further verification, such as functional testing. If all tests pass, the software version can be released and deployed into production.
  • In view of the above, the present disclosure provides a test case selection method and a test case selection apparatus based on change correlation analysis for DevOps processes.
  • By using the test case selection method and apparatus, test verification of an application change can be accomplished in a shorter time and with higher efficiency, thereby ensuring delivery quality during the continuous evolution and change of the software.
  • According to one aspect of the present disclosure, a test case selection method based on change correlation analysis for an integrated DevOps process is provided, including: obtaining distributed trace records of a historical test case set executed for an application and code change information of an application change, the distributed trace records showing the processing flow, within the application, of the user transactions triggered when the test cases are executed; determining, according to the distributed trace records and the code change information, a correlation metric value of each candidate test case in a candidate test case set relative to the application change; and selecting a target test case from the candidate test case set according to the correlation metric value of each candidate test case.
  • With the above method, the correlation metric value of each historical test case relative to the application change is determined from the distributed trace records of the historical test cases and the code change information of the application change, and these correlation metric values are used to select the target test cases. Test verification can therefore be performed using only the test cases most relevant to the application change, which reduces the number of test cases to be executed, shortens the time required for testing, and improves test efficiency.
  • the correlation metric value may include at least one of the following metric values: test coverage, test intensity, and test efficiency.
  • By using test coverage, test intensity and test efficiency to calculate the correlation metric value, the degree of correlation between a test case and the application change can be reflected more accurately, so that highly correlated test cases can be selected more accurately, further improving the test effect.
  • the test coverage includes component coverage and/or communication path coverage
  • the test efficiency includes time efficiency and/or work efficiency
  • the correlation metric value includes a normalized correlation metric value.
  • Optionally, selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case may include: determining a performance metric value of each candidate test case according to its correlation metric values; and selecting the target test case from the candidate test case set according to the performance metric value of each candidate test case.
  • In this way, target test case selection can take test performance into account, thereby ensuring the test effect for the application change.
  • Optionally, each correlation metric value has a weighting weight, and determining the performance metric value of each candidate test case according to its correlation metric values may include: determining the performance metric value of each candidate test case according to its correlation metric values and the respective weighting weights.
  • By assigning different weighting weights to the correlation metric values, the selection of target test cases can better match the application scenario or specific requirements.
  • Optionally, selecting the target test case from the candidate test case set according to the performance metric value of each candidate test case may include: selecting, from the candidate test case set, candidate test cases whose performance metric value is higher than a predetermined threshold as the target test cases; selecting, from the candidate test case set, candidate test cases whose performance metric value ranks within the Top N as the target test cases, where N is a predetermined positive integer or a predetermined proportion; or selecting the target test cases from the candidate test case set according to the performance metric value of each candidate test case based on a first predetermined test case selection strategy, the first predetermined test case selection strategy being a combined selection strategy derived from the performance metric values of the candidate test cases.
  • By using different selection strategies, the selected target test cases can be suited to different application scenarios, for example scenarios with different application environments, different test purposes or different application requirements.
  • Optionally, selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case may include: providing the correlation metric value of each candidate test case to a test case selection model to determine the target test case.
  • The test case selection model is trained using correlation metric values obtained from historical data as its training data, and the trained model is then used to select the target test cases, which can improve the accuracy of target test case determination.
  • Optionally, selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case may include: selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case based on a second predetermined test case selection strategy, the second predetermined test case selection strategy being a combined selection strategy derived from the correlation metric values of the candidate test cases.
  • the selection of the target test case can be made more convenient and efficient.
  • According to another aspect of the present disclosure, a test case selection apparatus based on change correlation analysis for a DevOps process is provided, comprising: a trace record acquisition unit configured to acquire distributed trace records of a historical test case set executed for an application, the distributed trace records showing the processing flow, within the application, of the user transactions triggered when the test cases are executed; a code change information acquisition unit configured to acquire code change information of an application change; a correlation metric value determination unit configured to determine, according to the distributed trace records and the code change information, the correlation metric value of each candidate test case in a candidate test case set relative to the application change; and a test case selection unit configured to select a target test case from the candidate test case set according to the correlation metric value of each candidate test case.
  • the correlation metric value includes at least one of the following metric values: test coverage, test intensity, and test efficiency.
  • the test coverage includes component coverage and/or communication path coverage
  • the test efficiency includes time efficiency and/or work efficiency
  • the test case selection unit includes: a performance measurement value determination module, configured to determine the performance measurement value of each candidate test case according to the correlation measurement value of each candidate test case; and a test case selection module configured to select a target test case from the candidate test case set according to the performance measurement value of each candidate test case.
  • Optionally, each correlation metric value has a weighting weight, and the performance metric value determination module is configured to determine the performance metric value of each candidate test case using its correlation metric values and the respective weighting weights.
  • the test case selection module is configured to: select a candidate test case whose performance measurement value is higher than a predetermined threshold from the candidate test case set as the target test case; In the candidate test case set, select a candidate test case whose performance measurement value is within Top N, as the target test case, where N is a predetermined positive integer or a predetermined proportional value; or based on the first predetermined test case selection strategy, according to The target test case is selected from the candidate test case set based on the performance measure value of each candidate test case, and the first predetermined test case selection strategy is a combined selection strategy obtained based on the performance measure value of each candidate test case.
  • the test case selection unit is configured to provide the test case selection model with the correlation measurement value of each candidate test case to determine the target test case.
  • Optionally, the test case selection unit is configured to select the target test case from the candidate test case set according to the correlation metric value of each candidate test case based on a second predetermined test case selection strategy, which is a combined selection strategy derived from the correlation metric values of the candidate test cases.
  • According to another aspect of the present disclosure, a computing device is provided, comprising: at least one processor; and a memory coupled with the at least one processor and configured to store instructions that, when executed by the at least one processor, cause the at least one processor to perform the test case selection method described above.
  • a machine-readable storage medium storing executable instructions that, when executed, cause the machine to perform the test case selection method as described above.
  • According to another aspect of the present disclosure, a computer program product is provided, which is tangibly stored on a computer-readable medium and comprises computer-executable instructions that, when executed, cause at least one processor to perform the test case selection method described above.
  • FIG. 1 shows an example flowchart of a test case selection method based on change correlation analysis for a DevOps process according to an embodiment of the present disclosure.
  • FIG. 2 shows an example schematic diagram of an application with a microservices architecture according to an embodiment of the present disclosure.
  • FIG. 3 shows an example schematic diagram of a distributed trace recording according to an embodiment of the present disclosure.
  • FIG. 4 shows an example schematic diagram of a span information list according to an embodiment of the present disclosure.
  • FIG. 5 shows an example schematic diagram of the communication paths involving the changed components in an application according to an embodiment of the present disclosure.
  • FIG. 6 shows an example schematic diagram of a metric value of a normalized test intensity according to an embodiment of the present disclosure.
  • Figure 7 shows an example schematic diagram of applying different test case selection strategies in the DevOps process.
  • FIG. 8 shows an example block diagram of a test case selection apparatus based on change correlation analysis for a DevOps process according to an embodiment of the present disclosure.
  • FIG. 9 shows a block diagram of an implementation example of a test case selection unit according to an embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of a computing device for implementing a test case selection process based on change correlation analysis according to an embodiment of the present disclosure.
  • the term “including” and variations thereof represent open-ended terms meaning “including but not limited to”.
  • the term “based on” means “based at least in part on”.
  • the terms “one embodiment” and “an embodiment” mean “at least one embodiment.”
  • the term “another embodiment” means “at least one other embodiment.”
  • the terms “first”, “second”, etc. may refer to different or the same objects. Other definitions, whether explicit or implicit, may be included below. The definition of a term is consistent throughout the specification unless the context clearly dictates otherwise.
  • In a DevOps-based software development process, software testing should be integrated into the code development, code build and code deployment process and performed in a continuous manner, so that software updates can be realized in a continuous process from development to build to deployment.
  • The above software testing approach may also be referred to as continuous testing.
  • Proper implementation of continuous testing ensures that software quality is thoroughly assessed at every step of the DevOps process, making it possible to rapidly deliver and deploy defect-free software versions. Therefore, continuous testing is widely used in software development.
  • The purpose of DevOps software development is to shorten the delivery time of applications and increase their release frequency, so the development time and effort that can be invested in each application release or change is relatively limited.
  • However, because industrial-grade software is functionally and architecturally complex, the amount of work required to verify an application by testing can be significant even for a small application change.
  • a typical microservices-based application consists of a set of independent components.
  • components can be deployed independently, easy to scale and support parallel development across multiple teams, etc.
  • this partitioning based on independently deployable components introduces additional complexity to test verification.
  • Although these components are developed independently, they must work together, e.g., interact with each other to handle a specific user task. Therefore, to have sufficient confidence in the overall quality of the software, it is not enough to test each component in isolation; all components must be tested together. As a result, even a small incremental code change to the application requires a large amount of test verification, such as interface testing, functional testing and performance testing, and the actual workload can be staggering.
  • In view of the above, embodiments of the present disclosure provide a test case selection method and a test case selection apparatus based on change correlation analysis for DevOps processes.
  • With the method and apparatus, distributed trace records of the historical test case set executed for the application and code change information of the application change are obtained; the correlation metric value of each candidate test case in a candidate test case set relative to the application change is determined according to the distributed trace records and the code change information; and the test cases most highly correlated with the application change are then selected from the candidate test case set to perform test verification. This reduces the number of test cases executed, shortens the time required for testing, and improves test efficiency.
  • FIG. 1 shows an example flowchart of a test case selection method 100 based on change correlation analysis for a DevOps process according to an embodiment of the present disclosure.
  • As shown in FIG. 1, at 110, distributed trace records of the historical test case set executed for the application are obtained.
  • In one example, distributed tracing tools such as Zipkin, Jaeger or OpenCensus can be used to obtain the distributed trace records of the historical test case set executed for the application.
  • In the present disclosure, a distributed trace record shows the processing flow, within the application, of the user transaction triggered when a test case is executed.
  • In one example, a distributed trace record records the communication paths and interaction information among the application components involved in processing the user transaction triggered by the test case. Therefore, whether an application component is exercised during the execution of a test case can be determined by detecting whether that component appears in the distributed trace record. The degree of correlation between a test case and an application change can then be assessed by comparing the changed application components involved in the application change to be verified with the application components actually exercised during execution of the test case.
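  • As an illustration only (not part of the published disclosure), the following Python sketch shows how such trace records could be obtained and inspected. It assumes Zipkin's v2 HTTP API (GET /api/v2/traces) and Zipkin-style span JSON with a `localEndpoint.serviceName` field; the base URL and function names are hypothetical.

```python
import requests

# Hypothetical Zipkin base URL; endpoint and span field names follow Zipkin's
# v2 HTTP API and are assumptions for illustration, not part of the disclosure.
ZIPKIN_URL = "http://zipkin.example.com:9411"

def fetch_traces(service_name, limit=100):
    """Fetch recent traces that passed through the given service component."""
    resp = requests.get(
        f"{ZIPKIN_URL}/api/v2/traces",
        params={"serviceName": service_name, "limit": limit},
    )
    resp.raise_for_status()
    return resp.json()  # a list of traces; each trace is a list of spans

def services_in_trace(trace):
    """Collect the set of service components that appear in one trace."""
    return {
        span["localEndpoint"]["serviceName"]
        for span in trace
        if span.get("localEndpoint", {}).get("serviceName")
    }

def exercised_changed_components(trace, changed_components):
    """Changed components that the traced test case actually touched."""
    return set(changed_components) & services_in_trace(trace)
```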
  • Figure 2 shows an example schematic diagram of an application with a microservices architecture according to an embodiment of the present disclosure.
  • the application has 9 service components, Service 1 to Service 9.
  • Each service component operates its own processing flow independently, and communicates and interacts with other service components to jointly realize specific business functions.
  • FIG. 3 shows an example schematic diagram of a distributed trace recording according to an embodiment of the present disclosure.
  • the distributed trace shown in FIG. 3 is a distributed trace of the historical set of test cases executed for the application shown in FIG. 2 .
  • The distributed trace record shows the overall processing flow of how a user transaction triggered by executing a test case is processed in the application; in the distributed tracing field, such a record is also referred to as a trace.
  • a distributed trace record (trace) consists of multiple time spans.
  • In FIG. 3, "duration" represents the total processing time of the distributed trace record, "service number" represents the number of services in the distributed trace record, and "total span" represents the total number of spans in the distributed trace record.
  • span is used to represent individual processing segments in the overall processing flow.
  • a span can be a time period that a specific operation of a component takes.
  • some spans can have parent-child relationships with other spans.
  • span 1 has two sub-spans, span 2 and span 4.
  • When one operation passes data to another operation or uses the functionality provided by another operation, there is a parent-child relationship between the spans corresponding to the two operations. In other words, if a parent span has a child span in a different application component, there is an interaction between the two application components, for example, a request sent by one application component invokes an operation in the other application component. By parsing the spans in the distributed trace record and the relationships between them, the communication paths, call relationships and interaction information between the different application components during test execution can be derived.
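  • The following sketch illustrates how communication paths could be derived from the parent-child relationships between spans, as described above. Span field names (`id`, `parentId`, `localEndpoint.serviceName`) are assumed to follow the Zipkin v2 format; the helper names are illustrative and not defined by the disclosure.

```python
def communication_paths(trace):
    """Derive service-to-service communication paths from the parent-child
    relationships between the spans of a single trace."""
    service_of = {
        span["id"]: span.get("localEndpoint", {}).get("serviceName")
        for span in trace
    }
    paths = set()
    for span in trace:
        parent_id = span.get("parentId")
        if not parent_id or parent_id not in service_of:
            continue
        caller = service_of[parent_id]
        callee = span.get("localEndpoint", {}).get("serviceName")
        # A parent span whose child span lives in a different component
        # indicates an interaction, i.e. a communication path.
        if caller and callee and caller != callee:
            paths.add((caller, callee))
    return paths

def paths_touching(paths, changed_components):
    """Communication paths related to at least one changed component."""
    return {p for p in paths
            if p[0] in changed_components or p[1] in changed_components}
```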
  • FIG. 4 shows an example schematic diagram of a span information list according to an embodiment of the present disclosure.
  • the span information list shown in FIG. 4 is derived from the distributed trace records shown in FIG. 3
  • FIG. 5 shows an example schematic diagram of the communication paths involving the changed application components in the application according to an embodiment of the present disclosure.
  • Returning to FIG. 1, at 120, the code change information of the application change is obtained.
  • In an application development environment, the source code of the application is stored in a configuration management database, and software configuration management tools such as ClearCase, Subversion or Git can be used to obtain the code change information of an application change directly from the configuration management database, for example, which parts of the application were changed. In the example shown in FIG. 2, the most recent application change modified three components, Service 1, Service 5 and Service 9, i.e., the service components marked with the symbol "!" in FIG. 2.
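  • As a minimal sketch (assuming a Git-managed code base in which every service component lives in its own top-level directory), the changed components could be derived from `git diff --name-only` as follows; the `component_of` mapping and revision names are hypothetical.

```python
import subprocess

def changed_components(repo_path, base_rev, head_rev, component_of):
    """List the application components touched between two revisions, based
    on the files reported by `git diff --name-only`.

    `component_of` maps a file path to a component name; the mapping is
    project specific (here, one top-level directory per service is assumed)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_rev}..{head_rev}"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    files = [line for line in out.splitlines() if line.strip()]
    return {component_of(f) for f in files if component_of(f)}

# Example usage for a one-directory-per-service layout (hypothetical revisions):
# changed = changed_components(".", "v1.4.0", "HEAD", lambda f: f.split("/")[0])
```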
  • At 130, the correlation metric value of each candidate test case in the candidate test case set relative to the application change is determined according to the distributed trace records and the code change information.
  • In one example of the present disclosure, examples of the correlation metric values may include, but are not limited to, test coverage, test intensity, test efficiency, and any combination thereof. In other words, the correlation (i.e., the degree of association) between a test case and the application change is evaluated from the three dimensions of test coverage, test intensity and test efficiency.
  • test coverage is used to reflect whether the changed parts of the application can actually be verified by the test cases to be evaluated. In other words, if the test case is related to application changes, the changed parts of the application should be checked during the execution of the test case.
  • examples of test coverage may include component coverage and/or communication path coverage.
  • The term "component coverage" quantitatively reflects the test coverage of the changed application components by the checks performed by the test case under evaluation. The larger the component coverage value, the stronger the correlation between the test case and the application change. The component coverage CCov can be calculated using the following equation (1):
  • CCov = N_trace / N_total   (1)
  • where N_trace represents the number of changed application components that appear in the distributed trace record, and N_total represents the total number of changed application components in the application.
  • a communication path exists when two application components interact.
  • An application component usually has multiple communication paths associated with it. Therefore, if an application component is changed, it is necessary to verify that all communication paths associated with that application component are functioning properly.
  • the communication path coverage can also be considered when performing correlation analysis between test cases and application changes.
  • the term "communication path coverage" is used to reflect the proportion of communication paths associated with changed application components relative to all communication paths associated with changed application components that are actually checked during test case execution. The greater the value of the communication path coverage, the greater the relevance of the test case to the application change.
  • The communication path coverage CPCov can be calculated using the following equation (2):
  • CPCov = P_trace / P_total   (2)
  • where P_trace represents the number of communication paths related to the changed application components that appear in the distributed trace record, and P_total represents the total number of communication paths related to the changed application components.
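  • The two coverage metrics can be transcribed directly from equations (1) and (2); the following small Python functions are illustrative and reproduce the example values given in the detailed description (CCov = 2/3, CPCov = 5/13).

```python
def component_coverage(changed_components, traced_services):
    """Equation (1): changed components seen in the trace / all changed components."""
    changed = set(changed_components)
    return len(changed & set(traced_services)) / len(changed) if changed else 0.0

def communication_path_coverage(changed_paths, traced_paths):
    """Equation (2): changed-component paths seen in the trace / all changed-component paths."""
    changed = set(changed_paths)
    return len(changed & set(traced_paths)) / len(changed) if changed else 0.0

# With the numbers of the example in FIG. 2 / FIG. 5:
#   CCov  = 2 / 3  = 66.67 %
#   CPCov = 5 / 13 = 38.46 %
```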
  • In the present disclosure, the test intensity TInt can be defined as the number of spans related to the changed application components. In the example shown in FIG. 2, there are three spans related to the changed application components, so the test intensity TInt has a value of 3. The larger the value of the test intensity TInt, the stronger the correlation between the test case and the application change.
  • test efficiency can also be considered when performing correlation analysis between test cases and application changes.
  • examples of test efficiency may include, but are not limited to, time efficiency TEff and work efficiency EEff.
  • The term "time efficiency TEff" is expressed as the proportion of the total processing time of the distributed trace record that is spent in the changed application components. The time efficiency TEff can be calculated using the following equation (3):
  • TEff = T_changed / T_total   (3)
  • where T_changed represents the time spent in the changed application components, and T_total represents the total processing time of the distributed trace record.
  • The term "work efficiency EEff" is expressed as the ratio of the number of spans related to the changed application components to the total number of spans in the distributed trace record. The work efficiency EEff can be calculated using the following equation (4):
  • EEff = S_changed / S_total   (4)
  • where S_changed represents the number of spans related to the changed application components, and S_total represents the total number of spans in the distributed trace record.
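  • A corresponding sketch for test intensity and the two efficiency metrics of equations (3) and (4) is shown below. It assumes Zipkin-style spans with a `duration` field and, like the worked example in the detailed description, takes the total processing time as the sum of all span durations.

```python
def intensity_and_efficiency(trace, changed_components):
    """Compute TInt, TEff (equation (3)) and EEff (equation (4)) from one
    distributed trace record."""
    changed_spans = [
        s for s in trace
        if s.get("localEndpoint", {}).get("serviceName") in changed_components
    ]
    t_int = len(changed_spans)                          # test intensity TInt
    total_time = sum(s.get("duration", 0) for s in trace) or 1
    changed_time = sum(s.get("duration", 0) for s in changed_spans)
    t_eff = changed_time / total_time                   # time efficiency, eq. (3)
    e_eff = t_int / (len(trace) or 1)                   # work efficiency, eq. (4)
    return t_int, t_eff, e_eff
```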
  • After the correlation metric values of the candidate test cases are obtained as above, at 140, the target test cases are selected from the candidate test case set according to the correlation metric value of each candidate test case.
  • Optionally, in one example of the present disclosure, selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case may include: determining the performance metric value of each candidate test case according to its correlation metric values; and selecting the target test cases from the candidate test case set according to the performance metric value of each candidate test case.
  • In one example, for each candidate test case, the performance metric value can be determined using its correlation metric values and the respective weighting weights, i.e., P(f) = W_CCov×CCov + W_CPCov×CPCov + W_TInt×TInt + W_TEff×TEff + W_EEff×EEff, where each weighting weight is a decimal value between 0 and 1 and the sum of all weighting weights equals 1.
  • It should be noted that, in the above weight allocation scheme, in one example the weighting weights may first be allocated to the test coverage, test intensity and test efficiency metrics. Then, if a correlation metric has multiple calculation indicators (for example, test coverage has the two indicators of component coverage and communication path coverage, and test efficiency has the two indicators of time efficiency and work efficiency), the weighting weight allocated to test coverage is divided again and allocated to the component coverage and communication path coverage indicators. Likewise, the weighting weight allocated to test efficiency is divided again and allocated to time efficiency and work efficiency. For example, the weighting weights may be evenly allocated to the three correlation metrics of test coverage, test intensity and test efficiency, so that each receives a weight of 1/3; the weights of 1/3 allocated to test coverage and test efficiency are then evenly divided again, so that component coverage, communication path coverage, time efficiency and work efficiency each receive a weight of 1/6. When the performance metric value is finally calculated, the weighting weights of component coverage, communication path coverage, time efficiency and work efficiency are each 1/6, and the weighting weight of test intensity is 1/3.
  • Optionally, in one example, in order for all correlation metric values to share the same value range when the performance metric value is calculated, the correlation metric values need to be normalized. In the above example, all correlation metric values except TInt already lie in the range [0, 1], so only TInt needs to be normalized. FIG. 6 shows an example schematic diagram of the normalized test intensity metric values according to an embodiment of the present disclosure. In FIG. 6, the label "AVERAGE" represents the average of the TInt values calculated over all candidate test cases. The performance metric value is then calculated using the normalized correlation metric values.
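  • The following sketch combines normalization and the weighted performance metric P(f). The divide-by-the-maximum normalization is only one plausible choice, since the disclosure does not fix the exact scheme, and the default weights reproduce the 1/6, 1/6, 1/3, 1/6, 1/6 allocation described above.

```python
def normalize(values):
    """Scale raw metric values into [0, 1] by dividing by the maximum; one
    plausible normalization, since the disclosure does not fix the scheme."""
    if not values:
        return []
    peak = max(values) or 1
    return [v / peak for v in values]

def performance_value(metrics, weights=None):
    """P(f) = W_CCov*CCov + W_CPCov*CPCov + W_TInt*TInt + W_TEff*TEff + W_EEff*EEff.

    `metrics` holds normalized correlation metric values; the default weights
    reproduce the 1/6, 1/6, 1/3, 1/6, 1/6 allocation described above."""
    weights = weights or {"CCov": 1/6, "CPCov": 1/6, "TInt": 1/3,
                          "TEff": 1/6, "EEff": 1/6}
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(w * metrics[name] for name, w in weights.items())

# Example with the (normalized) values of the worked example above:
# performance_value({"CCov": 0.67, "CPCov": 0.38, "TInt": 0.5,
#                    "TEff": 0.35, "EEff": 0.33})
```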
  • Optionally, when selecting the target test cases from the candidate test case set according to the performance metric value of each candidate test case, in one example, candidate test cases whose performance metric value is higher than a predetermined threshold may be selected from the candidate test case set as the target test cases.
  • candidate test cases with performance metric values greater than 0.5 may be determined as target test cases.
  • a candidate test case whose performance measurement value is within Top N may be selected from the candidate test case set as a target test case, where N is a predetermined positive integer or a predetermined proportional value.
  • the 10 candidate test cases ranked in the Top 10 can be determined as the target test cases, or the candidate test cases ranked in the Top 20% can be determined as the target test cases.
  • the target test case may be selected from the candidate test case set according to the performance test value of each candidate test case based on the first predetermined test case selection strategy.
  • the first predetermined test case selection strategy is a combined selection strategy obtained based on the performance measurement values of each candidate test case.
  • For example, an example of the first predetermined test case selection strategy may be ((P(f), Top(20%)) AND (P(f), HigherThan(0.5))), which indicates that test cases whose performance metric value ranks in the Top 20% and is greater than 0.5 are selected from the candidate test cases as the target test cases. It should be noted that the above is only an example of the first predetermined test case selection strategy, and other suitable combined selection strategies may be adopted in this specification.
  • Optionally, in one example of the present disclosure, when selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case, the correlation metric value of each candidate test case may also be provided to a test case selection model to determine the target test cases.
  • The test case selection model can be trained using correlation metric values obtained from historical data as model training data.
  • Optionally, in one example of the present disclosure, when selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case, the target test cases may be selected from the candidate test case set according to the correlation metric value of each candidate test case based on a second predetermined test case selection strategy.
  • Here, the second predetermined test case selection strategy may be a combined selection strategy derived from the correlation metric values of the candidate test cases.
  • For example, an example of the second predetermined test case selection strategy may be ((TInt, HigherThan(AVERAGE×150%)) OR ((CCov, TopValues(30%)) AND (CPCov, TopValues(30%)))), which indicates that test cases whose test intensity is greater than 1.5 times the average test intensity of all test cases, together with test cases whose component coverage and communication path coverage both rank in the Top 30%, are selected from the candidate test cases as the target test cases. It should be noted that the above is only an example of the second predetermined test case selection strategy, and other suitable combined selection strategies may be adopted in this specification.
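  • As an illustration, the combined selection strategies above could be evaluated with simple set operations, mapping Top/TopValues to a rank cut-off, HigherThan to a threshold, and AND/OR to intersection and union; the function names are illustrative and not defined by the disclosure.

```python
def top_values(scores, fraction):
    """Cases whose score ranks within the top `fraction` of all candidates."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return set(ranked[:cutoff])

def higher_than(scores, threshold):
    """Cases whose score is strictly greater than `threshold`."""
    return {case for case, value in scores.items() if value > threshold}

def first_strategy(perf):
    """((P(f), Top(20%)) AND (P(f), HigherThan(0.5)))"""
    return top_values(perf, 0.20) & higher_than(perf, 0.5)

def second_strategy(tint, ccov, cpcov):
    """((TInt, HigherThan(AVERAGE x 150%)) OR
       ((CCov, TopValues(30%)) AND (CPCov, TopValues(30%))))"""
    average = sum(tint.values()) / len(tint)
    return higher_than(tint, average * 1.5) | (
        top_values(ccov, 0.30) & top_values(cpcov, 0.30)
    )
```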
  • test case selection strategies are described above, and different test case selection strategies can be flexibly implemented in the DevOps process according to different processing stages and quality verification goals. This flexibility greatly facilitates the implementation of continuous testing in the DevOps process.
  • Figure 7 shows an example schematic diagram of applying different test case selection strategies in the DevOps process.
  • the DevOps process includes three levels of testing, namely, testing against the developer branch, testing against the feature branch, and testing against the release branch. Different levels of testing have different quality verification concerns, and accordingly, the adopted test case selection strategies will also be different.
  • A developer branch is a local copy of the application's code base to which a software developer submits code changes. Testing against the developer branch is designed to provide the developer with quick quality feedback, for example, whether the application still behaves as expected after the developer changes it. That is, the focus of test verification is to quickly detect whether the application contains defects. Accordingly, a test case selection strategy based on performance metric values can be chosen. For example, weighting weights of 0.2, 0.4 and 0.4 are assigned to test coverage, test intensity and test efficiency, where the weighting weights represent priorities in the performance evaluation.
  • Then, after all performance metric values have been calculated, the 20% of test cases with the best performance metric values are selected as the target test cases, i.e., the test case selection strategy is (P(f), TopValues(20%)), where the weighting weights used for test coverage, test intensity and test efficiency in the performance metric calculation are 0.2, 0.4 and 0.4, respectively.
  • a feature branch is shared by several developers working together on a task. Whenever a developer thinks their work is done, the code changes in their corresponding developer branch are committed to the feature branch. Testing for feature branches should ensure coordination between application changes made by different developers and provide timely feedback if a code change submitted by one developer causes an overall failure of the software. As such, test coverage and test intensity are more important when considering test case selection criteria for feature branch testing. At the same time, the frequency of developers submitting code changes to the feature branch is usually less than that in their own developer branch, so compared to the test case selection in the developer branch, the requirements for test efficiency can be relaxed. In this case, a test case selection strategy based on performance metrics can also be used.
  • However, the weighting weights assigned to test coverage, test intensity and test efficiency need to be adjusted, for example, to 0.4, 0.4 and 0.2, respectively.
  • The test case selection strategy can then be (P(f), TopValues(50%)), where the weighting weights used for test coverage, test intensity and test efficiency in the performance metric calculation are 0.4, 0.4 and 0.2, respectively.
  • the release branch is the basis for formal software releases. After all the work is done, the code changes in the feature branch will finally be merged here. Testing against release branches is designed to maximize the ability to detect failures associated with code changes.
  • In this case, the following test case selection strategy can be used: (CCov, HigherThan(0)). This means that if a test case checks at least one changed component, it is selected as a target test case. In other words, testing against the release branch aims to run all test cases that can check the changed components, regardless of their test intensity and test efficiency metric values, so as to maximize the ability to detect software defects.
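  • One possible way to encode these branch-level strategies as configuration is sketched below; it reuses the `top_values` and `higher_than` helpers from the previous sketch, and the dictionary structure, key names and function signature are assumptions for illustration.

```python
# Illustrative encoding of the branch-level strategies of FIG. 7.
BRANCH_STRATEGIES = {
    # quick feedback for developers: efficiency-heavy weights, best 20%
    "developer": {"weights": {"coverage": 0.2, "intensity": 0.4, "efficiency": 0.4},
                  "select": ("TopValues", 0.20)},
    # coordination between developers: coverage/intensity-heavy, best 50%
    "feature":   {"weights": {"coverage": 0.4, "intensity": 0.4, "efficiency": 0.2},
                  "select": ("TopValues", 0.50)},
    # formal release: every test case that checks at least one changed component
    "release":   {"weights": None,
                  "select": ("CCovHigherThan", 0.0)},
}

def select_for_branch(branch, perf_scores, ccov_scores):
    """Apply the configured selection rule for one DevOps branch level."""
    kind, arg = BRANCH_STRATEGIES[branch]["select"]
    if kind == "TopValues":
        return top_values(perf_scores, arg)   # (P(f), TopValues(arg))
    return higher_than(ccov_scores, arg)      # (CCov, HigherThan(0))
```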
  • The test case selection method based on change correlation analysis for a DevOps process has been described above with reference to FIGS. 1 to 7.
  • With the above method, the correlation metric value of each historical test case relative to the application change is determined from the distributed trace records of the historical test cases and the code change information of the application change, and these correlation metric values are used to select the target test cases, so that test verification can be performed using only the test cases most relevant to the application change. This reduces the number of test cases to be executed, shortens the time required for testing, and improves test efficiency.
  • By using test coverage, test intensity and test efficiency as the correlation metrics, the correlation between test cases and the application change can be reflected more accurately, and test cases with high correlation to the application change can be selected more accurately, thereby further improving the test effect.
  • the selection of target test cases can be more in line with application scenarios or specific requirements.
  • The selected target test cases can thus be adapted to different application scenarios, for example scenarios with different application environments, different test purposes or different application requirements.
  • the selection of the target test case can be made more convenient and efficient.
  • FIG. 8 shows an example block diagram of a test case selection apparatus 800 based on change correlation analysis for a DevOps process according to an embodiment of the present disclosure.
  • the test case selection apparatus 800 includes a trace record acquisition unit 810 , a code change information acquisition unit 820 , a correlation measurement value determination unit 830 and a test case selection unit 840 .
  • the trace record acquisition unit 810 is configured to acquire distributed trace records of historical test case sets executed for the application, where the distributed trace records are used to show the processing flow in the application of user transactions triggered when the test cases are executed.
  • the operation of the trace record obtaining unit 810 may refer to the operation of 110 described above with reference to FIG. 1 .
  • the code change information acquisition unit 820 is configured to acquire code change information of application changes.
  • the operation of the code change information acquisition unit 820 may refer to the operation of 120 described above with reference to FIG. 1 .
  • the correlation metric value determination unit 830 is configured to determine the correlation metric value of each candidate test case in the candidate test case set with respect to the application program change according to the distributed trace record and the code change information.
  • the operation of the correlation metric value determination unit 830 may refer to the operation of 130 described above with reference to FIG. 1 .
  • the test case selection unit 840 is configured to select the target test case from the candidate test case set according to the correlation measurement value of each candidate test case.
  • the operation of the test case selection unit 840 may refer to the operation of 140 described above with reference to FIG. 1 .
  • the correlation metric value includes at least one of the following metric values: test coverage, test intensity, and test efficiency.
  • the test coverage includes component coverage and/or communication path coverage
  • the test efficiency includes time efficiency and/or work efficiency
  • FIG. 9 shows a block diagram of an implementation example of the test case selection unit 840 according to an embodiment of the present disclosure.
  • the test case selection unit 840 includes a performance measurement value determination module 841 and a test case selection module 843 .
  • the performance measurement value determination module 841 is configured to determine the performance measurement value of each candidate test case according to the correlation measurement value of each candidate test case. Then, the test case selection module 843 selects the target test case from the candidate test case set according to the performance measurement value of each candidate test case.
  • each correlation measure has a weighted weight. Accordingly, the performance measurement value determination module 841 is configured to determine the performance measurement value of each candidate test case using the correlation measurement value of each candidate test case and the respective weighting weight.
  • the test case selection module 843 is configured to select a candidate test case whose performance metric value is higher than a predetermined threshold from the candidate test case set as a target test case. In another example, the test case selection module 843 is configured to select a candidate test case whose performance measurement value is within Top N from the candidate test case set, as the target test case, where N is a predetermined positive integer or a predetermined proportional value. In addition, in another example, the test case selection module 843 is configured to select the target test case from the candidate test case set according to the performance measurement value of each candidate test case based on the first predetermined test case selection strategy, the first predetermined test case The selection strategy is a combined selection strategy based on the performance metrics of each candidate test case.
  • test case selection unit 840 is configured to provide the correlation measure of each candidate test case to the test case selection model to determine the target test case.
  • test case selection unit 840 is configured to select the target test case from the candidate test case set based on the second predetermined test case selection strategy and the correlation measurement value of each candidate test case.
  • the second predetermined test case selection strategy is a combined selection strategy obtained based on the correlation measurement value of each candidate test case.
  • test case selection method may be implemented by hardware, or may be implemented by software or a combination of hardware and software.
  • FIG. 10 shows a schematic diagram of a computing device 1000 for implementing a test case selection process based on change correlation analysis according to an embodiment of the present disclosure.
  • As shown in FIG. 10, the computing device 1000 may include at least one processor 1010, a storage (e.g., a non-volatile storage) 1020, a memory 1030 and a communication interface 1040, and the at least one processor 1010, the storage 1020, the memory 1030 and the communication interface 1040 are connected together via a bus 1060.
  • The at least one processor 1010 executes at least one computer-readable instruction (i.e., the above-described elements implemented in software) stored or encoded in the storage.
  • Computer-executable instructions are stored in the storage that, when executed, cause the at least one processor 1010 to: obtain distributed trace records of a historical test case set executed for the application and code change information of an application change, the distributed trace records showing the processing flow, within the application, of the user transactions triggered when the test cases are executed; determine, according to the distributed trace records and the code change information, the correlation metric value of each candidate test case in a candidate test case set relative to the application change; and select a target test case from the candidate test case set according to the correlation metric value of each candidate test case.
  • According to one embodiment, a program product such as a machine-readable medium (e.g., a non-transitory machine-readable medium) is provided. The machine-readable medium may have instructions (i.e., the above-described elements implemented in software) that, when executed by a machine, cause the machine to perform the various operations and functions described above in connection with FIGS. 1 to 9 in the various embodiments of this specification.
  • Specifically, a system or apparatus equipped with a readable storage medium may be provided, on which software program code implementing the functions of any of the above-described embodiments is stored, and a computer or processor of the system or apparatus reads and executes the instructions stored in the readable storage medium. In this case, the program code itself read from the readable medium can implement the functions of any of the above-described embodiments, so the machine-readable code and the readable storage medium storing the machine-readable code form part of the present invention.
  • Examples of readable storage media include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW), magnetic tape, non-volatile memory cards and ROM.
  • the program code may be downloaded from a server computer or the cloud over a communications network.
  • The apparatus structure described in the above embodiments may be a physical structure or a logical structure; that is, some units may be implemented by the same physical entity, some units may be implemented by multiple physical entities respectively, or some units may be implemented jointly by certain components in multiple independent devices.
  • the hardware units or modules may be implemented mechanically or electrically.
  • a hardware unit, module or processor may include permanent dedicated circuits or logic (eg, dedicated processors, FPGAs or ASICs) to perform corresponding operations.
  • the hardware unit or processor may also include programmable logic or circuits (such as a general-purpose processor or other programmable processors), which may be temporarily set by software to complete corresponding operations.
  • The specific implementation may be mechanical, a dedicated permanent circuit, or a temporarily configured circuit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A test case selection method and apparatus based on change correlation analysis suitable for an integrated development and operations (DevOps) process. In the method, distributed trace records of a historical test case set executed for an application and code change information of an application change are obtained, the distributed trace records showing the processing flow, within the application, of the user transactions triggered when the test cases are executed. In addition, a correlation metric value of each candidate test case in a candidate test case set relative to the application change is determined according to the distributed trace records and the code change information. Target test cases are then selected from the candidate test case set according to the correlation metric value of each candidate test case. With this method, the test case set with the best correlation can be selected in an automated manner according to the specific code change and the quality verification goal, thereby ensuring software delivery quality during the continuous evolution and change of the application.

Description

Test case selection method and apparatus based on change correlation analysis
Technical Field
The present disclosure generally relates to the field of software development, and in particular to a test case selection method and apparatus based on change correlation analysis that are suitable for an integrated development and operations (DevOps) process.
Background
DevOps is the most popular software development approach in today's software industry. Its core idea is to introduce scheduled releases and continuous integration, thereby shortening the software development cycle and increasing the release frequency. Although this approach can effectively shorten software delivery time and reduce the cost of software changes, it also poses a challenge to testing and quality verification, namely, how to ensure software delivery quality with less time and effort during the continuous evolution and change of the software.
In traditional software projects, software testing is usually performed by a dedicated test team, the project is explicitly divided into a development phase and a test phase, and more time is available to ensure that the software is fully verified by testing. This experience no longer applies to the DevOps software development process.
In a DevOps-based software development process, software testing should be integrated into the code development, code build and code deployment process and performed in a continuous manner, so that software updates can be realized in a continuous process from development to build to deployment. For example, whenever a software developer delivers a software product, the code build process is triggered. The continuous integration engine compiles the source code and runs a set of automated unit tests. If the code does not compile successfully or any unit test fails, the code build process fails and the software developer is notified. If the code build process completes successfully, the code delivery is accepted and deployed to the test environment for further verification, such as functional testing. If all tests pass, the software version can be released and deployed to the production environment. In this way, software code can be developed, delivered, tested and deployed without interruption. It follows that the DevOps process requires a systematic testing approach that verifies software code changes early, frequently and automatically, which may also be called continuous testing. Proper implementation of continuous testing ensures that software quality is thoroughly assessed at every step of the DevOps process, making it possible to rapidly deliver and deploy defect-free software versions. For this reason, continuous testing has been widely adopted as a best practice in software development organizations. A key problem to be solved in continuous testing is how to select suitable test cases at a specific stage of software development to verify the quality of an application change.
Summary
In view of the above, the present disclosure provides a test case selection method and a test case selection apparatus based on change correlation analysis for a DevOps process. With the test case selection method and apparatus, test verification of an application change can be accomplished in a shorter time and with higher efficiency, thereby ensuring delivery quality during the continuous evolution and change of the software.
According to one aspect of the present disclosure, a test case selection method based on change correlation analysis for an integrated DevOps process is provided, including: obtaining distributed trace records of a historical test case set executed for an application and code change information of an application change, the distributed trace records showing the processing flow, within the application, of the user transactions triggered when the test cases are executed; determining, according to the distributed trace records and the code change information, a correlation metric value of each candidate test case in a candidate test case set relative to the application change; and selecting a target test case from the candidate test case set according to the correlation metric value of each candidate test case.
With the above method, the correlation metric value of each historical test case relative to the application change is determined from the distributed trace records of the historical test cases and the code change information of the application change, and these correlation metric values are used to select the target test cases, so that test verification can be performed using only the test cases most relevant to the application change, which reduces the number of test cases to be executed, shortens the time required for testing and improves test efficiency.
Optionally, in an example of the above aspect, the correlation metric value may include at least one of the following metric values: test coverage, test intensity and test efficiency.
With the above method, by using test coverage, test intensity and test efficiency to calculate the correlation metric value, the degree of correlation between a test case and the application change can be reflected more accurately, so that highly correlated test cases can be selected more accurately, further improving the test effect.
Optionally, in an example of the above aspect, the test coverage includes component coverage and/or communication path coverage, and/or the test efficiency includes time efficiency and/or work efficiency.
With the above method, by using component coverage and/or communication path coverage as the test coverage and time efficiency and/or work efficiency as the test efficiency, the degree of association between a test case and the application change can be determined more accurately, which improves the accuracy of the correlation metric values of the test cases and further improves the test effect.
Optionally, in an example of the above aspect, the correlation metric value includes a normalized correlation metric value.
With the above method, by using normalized correlation metric values to determine the correlation between a test case and the application change, the adverse effects introduced by the different measurement units of the correlation metrics can be eliminated, thereby improving the accuracy of the correlation metric values.
Optionally, in an example of the above aspect, selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case may include: determining a performance metric value of each candidate test case according to its correlation metric values; and selecting the target test case from the candidate test case set according to the performance metric value of each candidate test case.
With the above method, by using the correlation metric values to determine the performance metric value of each test case and using the performance metric values to select the target test cases, target test case selection can take test performance into account, thereby ensuring the test effect for the application change.
Optionally, in an example of the above aspect, each correlation metric value has a weighting weight, and determining the performance metric value of each candidate test case according to its correlation metric values may include: determining the performance metric value of each candidate test case according to its correlation metric values and the respective weighting weights.
With the above method, by assigning different weighting weights to the correlation metric values, the selection of target test cases can better match the application scenario or specific requirements.
Optionally, in an example of the above aspect, selecting the target test case from the candidate test case set according to the performance metric value of each candidate test case may include: selecting, from the candidate test case set, candidate test cases whose performance metric value is higher than a predetermined threshold as the target test cases; selecting, from the candidate test case set, candidate test cases whose performance metric value ranks within the Top N as the target test cases, where N is a predetermined positive integer or a predetermined proportion; or selecting the target test cases from the candidate test case set according to the performance metric value of each candidate test case based on a first predetermined test case selection strategy, the first predetermined test case selection strategy being a combined selection strategy derived from the performance metric values of the candidate test cases.
With the above method, by using different selection strategies to perform target test case selection, the selected target test cases can be suited to different application scenarios, for example scenarios with different application environments, different test purposes or different application requirements.
Optionally, in an example of the above aspect, selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case may include: providing the correlation metric value of each candidate test case to a test case selection model to determine the target test case.
With the above method, the test case selection model is trained using correlation metric values obtained from historical data as its training data, and the trained model is then used to select the target test cases, which can improve the accuracy of target test case determination.
Optionally, in an example of the above aspect, selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case may include: selecting the target test case from the candidate test case set according to the correlation metric value of each candidate test case based on a second predetermined test case selection strategy, the second predetermined test case selection strategy being a combined selection strategy derived from the correlation metric values of the candidate test cases.
With the above method, by using a combined strategy derived from the correlation metric values to select the target test cases, the selection of target test cases can be made more convenient and efficient.
According to another aspect of the present disclosure, a test case selection apparatus based on change correlation analysis for a DevOps process is provided, including: a trace record acquisition unit configured to acquire distributed trace records of a historical test case set executed for an application, the distributed trace records showing the processing flow, within the application, of the user transactions triggered when the test cases are executed; a code change information acquisition unit configured to acquire code change information of an application change; a correlation metric value determination unit configured to determine, according to the distributed trace records and the code change information, the correlation metric value of each candidate test case in a candidate test case set relative to the application change; and a test case selection unit configured to select a target test case from the candidate test case set according to the correlation metric value of each candidate test case.
Optionally, in an example of the above aspect, the correlation metric value includes at least one of the following metric values: test coverage, test intensity and test efficiency.
Optionally, in an example of the above aspect, the test coverage includes component coverage and/or communication path coverage, and/or the test efficiency includes time efficiency and/or work efficiency.
Optionally, in an example of the above aspect, the test case selection unit includes: a performance metric value determination module configured to determine the performance metric value of each candidate test case according to its correlation metric values; and a test case selection module configured to select the target test case from the candidate test case set according to the performance metric value of each candidate test case.
Optionally, in an example of the above aspect, each correlation metric value has a weighting weight, and the performance metric value determination module is configured to determine the performance metric value of each candidate test case using its correlation metric values and the respective weighting weights.
Optionally, in an example of the above aspect, the test case selection module is configured to: select, from the candidate test case set, candidate test cases whose performance metric value is higher than a predetermined threshold as the target test cases; select, from the candidate test case set, candidate test cases whose performance metric value ranks within the Top N as the target test cases, where N is a predetermined positive integer or a predetermined proportion; or select the target test cases from the candidate test case set according to the performance metric value of each candidate test case based on a first predetermined test case selection strategy, the first predetermined test case selection strategy being a combined selection strategy derived from the performance metric values of the candidate test cases.
Optionally, in an example of the above aspect, the test case selection unit is configured to provide the correlation metric value of each candidate test case to a test case selection model to determine the target test case.
Optionally, in an example of the above aspect, the test case selection unit is configured to select the target test case from the candidate test case set according to the correlation metric value of each candidate test case based on a second predetermined test case selection strategy, the second predetermined test case selection strategy being a combined selection strategy derived from the correlation metric values of the candidate test cases.
According to another aspect of the present disclosure, a computing device is provided, including: at least one processor; and a memory coupled with the at least one processor and configured to store instructions that, when executed by the at least one processor, cause the at least one processor to perform the test case selection method described above.
According to another aspect of the present disclosure, a machine-readable storage medium is provided, which stores executable instructions that, when executed, cause the machine to perform the test case selection method described above.
According to another aspect of the present disclosure, a computer program product is provided, which is tangibly stored on a computer-readable medium and includes computer-executable instructions that, when executed, cause at least one processor to perform the test case selection method described above.
Brief Description of the Drawings
A further understanding of the nature and advantages of the contents of this specification can be obtained by reference to the following drawings. In the drawings, similar components or features may have the same reference numerals.
FIG. 1 shows an example flowchart of a test case selection method based on change correlation analysis for a DevOps process according to an embodiment of the present disclosure.
FIG. 2 shows an example schematic diagram of an application with a microservices architecture according to an embodiment of the present disclosure.
FIG. 3 shows an example schematic diagram of a distributed trace record according to an embodiment of the present disclosure.
FIG. 4 shows an example schematic diagram of a span information list according to an embodiment of the present disclosure.
FIG. 5 shows an example schematic diagram of the communication paths involving the changed components in an application according to an embodiment of the present disclosure.
FIG. 6 shows an example schematic diagram of the normalized test intensity metric values according to an embodiment of the present disclosure.
FIG. 7 shows an example schematic diagram of applying different test case selection strategies in the DevOps process.
FIG. 8 shows an example block diagram of a test case selection apparatus based on change correlation analysis for a DevOps process according to an embodiment of the present disclosure.
FIG. 9 shows a block diagram of an implementation example of a test case selection unit according to an embodiment of the present disclosure.
FIG. 10 shows a schematic diagram of a computing device for implementing a test case selection process based on change correlation analysis according to an embodiment of the present disclosure.
Reference numerals
100  test case selection process
110  obtain distributed trace records of the historical test case set executed for the application
120  obtain code change information of the application change
130  determine, according to the distributed trace records and the code change information, the correlation metric value of each candidate test case relative to the application change
140  determine the target test cases according to the correlation metric value of each candidate test case
800  test case selection apparatus
810  trace record acquisition unit
820  code change information acquisition unit
830  correlation metric value determination unit
840  test case selection unit
841  performance metric value determination module
843  test case selection module
1000  computing device
1010  processor
1020  storage
1030  memory
1040  communication interface
1060  bus
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand and implement the subject matter described herein, and are not intended to limit the scope of protection, applicability or examples set forth in the claims. The functions and arrangements of the elements discussed can be changed without departing from the scope of protection of this specification. Various processes or components may be omitted, replaced or added in the examples as needed. For example, the described methods may be performed in an order different from that described, and steps may be added, omitted or combined. In addition, features described with respect to some examples may also be combined in other examples.
As used herein, the term "including" and its variants are open-ended terms meaning "including but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first", "second", etc. may refer to different or the same objects. Other definitions, whether explicit or implicit, may be included below. Unless the context clearly indicates otherwise, the definition of a term is consistent throughout the specification.
In a DevOps-based software development process, software testing should be integrated into the code development, code build and code deployment process and performed in a continuous manner, so that software updates can be realized in a continuous process from development to build to deployment. This software testing approach may also be referred to as continuous testing. Proper implementation of continuous testing ensures that software quality is thoroughly assessed at every step of the DevOps process, making it possible to rapidly deliver and deploy defect-free software versions. Continuous testing is therefore widely applied in software development.
The purpose of DevOps software development is to shorten the delivery time of applications and increase their release frequency, so the development time and effort that can be invested in each release or change of an application is relatively limited. However, because industrial-grade software is functionally and architecturally complex, the amount of work required to verify an application by testing can be significant even for a small application change.
The currently popular microservices software architecture is taken as an example below. A typical microservices-based application consists of a set of independent components. This software architecture has many advantages; for example, components can be deployed independently, are easy to scale, and support parallel development across multiple teams. However, this partitioning into independently deployable components introduces additional complexity into test verification. Although these components are developed independently, they must work together, for example, interact with each other to handle a specific user task. Therefore, to have sufficient confidence in the overall quality of the software, it is not enough to test each component in isolation; all components must be tested together. As a result, even a small incremental code change to the application requires a large amount of test verification, such as interface testing, functional testing and performance testing, and the actual workload can be staggering.
In practice, for an application change, development teams or quality assurance personnel usually tend toward a maximized test execution strategy, i.e., performing as much test verification as possible. If the test verification is insufficient, the risk of quality defects entering the production environment may exceed a reasonable tolerance. Although this maximized test execution strategy is relatively safe, it has an obvious drawback: it requires a great deal of time and effort and is very inefficient. Clearly, a better approach is to select a subset of all test cases to execute, i.e., to execute only those test cases that are related to the application change for quality verification, while test cases unrelated to the application change can be ignored, thereby reducing the test verification workload.
In view of the above, embodiments of the present disclosure provide a test case selection method and a test case selection apparatus based on change correlation analysis for a DevOps process. With the test case selection method and apparatus, distributed trace records of the historical test case set executed for an application and code change information of an application change are obtained, and the correlation metric value of each candidate test case in a candidate test case set relative to the application change is determined according to the distributed trace records and the code change information; the correlation metric values of the candidate test cases are then used to select, from the candidate test case set, the test cases that are highly correlated with the application change to perform test verification, which reduces the number of test cases executed, shortens the time required for testing and improves test efficiency.
The test case selection method and test case selection apparatus based on change correlation analysis for a DevOps process according to embodiments of the present disclosure are described in detail below with reference to the drawings.
FIG. 1 shows an example flowchart of a test case selection method 100 based on change correlation analysis for a DevOps process according to an embodiment of the present disclosure.
As shown in FIG. 1, at 110, distributed trace records of the historical test case set executed for the application are obtained. In one example, distributed tracing tools such as Zipkin, Jaeger or OpenCensus can be used to obtain the distributed trace records of the historical test case set executed for the application. In the present disclosure, a distributed trace record shows the processing flow, within the application, of the user transaction triggered when a test case is executed. In one example, a distributed trace record records the communication paths and interaction information among the application components involved in processing the user transaction triggered by the test case. Therefore, whether an application component is exercised during the execution of a test case can be determined by detecting whether that component appears in the distributed trace record. Then, by comparing the changed application components involved in the application change to be verified with the application components actually exercised during execution of the test case, the degree of association between the test case and the application change can be assessed.
FIG. 2 shows an example schematic diagram of an application with a microservices architecture according to an embodiment of the present disclosure.
As shown in FIG. 2, the application has nine service components, Service 1 to Service 9. Each service component runs its own processing flow independently and communicates and interacts with the other service components to jointly realize specific business functions.
FIG. 3 shows an example schematic diagram of a distributed trace record according to an embodiment of the present disclosure. The distributed trace record shown in FIG. 3 is a distributed trace record of the historical test case set executed for the application shown in FIG. 2.
As shown in FIG. 3, the distributed trace record shows the overall processing flow of how a user transaction triggered by executing a test case is processed in the application; in the distributed tracing field, such a record is also referred to as a trace. A distributed trace record (trace) consists of multiple time spans. In addition, "duration" in FIG. 3 represents the total processing time of the distributed trace record, "service number" represents the number of services in the distributed trace record, and "total span" represents the total number of spans in the distributed trace record. In the present disclosure, a span represents an individual processing segment in the overall processing flow. A span can be the time period taken by a specific operation of a component. In a distributed trace record, some spans can have parent-child relationships with other spans. For example, as shown in FIG. 3, span 1 has two child spans, span 2 and span 4. When one operation passes data to another operation or uses the functionality provided by another operation, there is a parent-child relationship between the spans corresponding to the two operations. In other words, if a parent span has a child span in a different application component, there is an interaction between the two application components, for example, a request sent by one application component invokes an operation in the other application component. By parsing the spans in the distributed trace record and the relationships between them, the communication paths, call relationships and interaction information between the different application components during test execution can be derived. FIG. 4 shows an example schematic diagram of a span information list according to an embodiment of the present disclosure. The span information list shown in FIG. 4 is derived from the distributed trace record shown in FIG. 3, and FIG. 5 shows an example schematic diagram of the communication paths involving the changed application components in the application according to an embodiment of the present disclosure.
Returning to FIG. 1, at 120, the code change information of the application change is obtained. In an application development environment, the source code of the application is stored in a configuration management database, and software configuration management tools such as ClearCase, Subversion or Git can be used to obtain the code change information of an application change directly from the configuration management database, for example, which parts of the application were changed. For example, in the example shown in FIG. 2, the most recent application change modified three components, Service 1, Service 5 and Service 9, i.e., the service components marked with the symbol "!" in FIG. 2.
At 130, the correlation metric value of each candidate test case in the candidate test case set relative to the application change is determined according to the distributed trace records and the code change information. In one example of the present disclosure, examples of the correlation metric values may include, but are not limited to, test coverage, test intensity, test efficiency, and any combination thereof. In other words, the correlation (i.e., the degree of association) between a test case and the application change needs to be evaluated from the three dimensions of test coverage, test intensity and test efficiency.
In the present disclosure, the term "test coverage" reflects whether the changed parts of the application can actually be verified by the test case under evaluation. In other words, if a test case is related to the application change, the changed parts of the application should be exercised during execution of that test case. In one example, test coverage may include component coverage and/or communication path coverage.
The term "component coverage" quantitatively reflects the test coverage of the changed application components by the checks performed by the test case under evaluation. The larger the component coverage value, the higher the test coverage of the changed application components by the test case, i.e., the stronger the correlation between the test case and the application change. The component coverage CCov can be calculated using the following equation (1):
CCov = N_trace / N_total   (1)
where N_trace represents the number of changed application components that appear in the distributed trace record, and N_total represents the total number of changed application components in the application.
In the example shown in FIG. 2, there are three changed application components, Service 1, Service 5 and Service 9. Furthermore, from the distributed trace record shown in FIG. 3 and FIG. 4, the changed application components that appear in the distributed trace record are Service 1 and Service 9, i.e., the number of changed application components appearing in the distributed trace record is 2, so CCov = 2/3 = 66.67%.
Considering only one coverage metric is not comprehensive enough when performing the correlation analysis. In an application, a communication path exists when two application components interact. An application component usually has multiple communication paths associated with it. Therefore, if an application component is changed, it is necessary to verify that all communication paths related to that application component still work properly. In this case, communication path coverage can also be considered in the correlation analysis between a test case and the application change. The term "communication path coverage" reflects the proportion of the communication paths related to the changed application components that are actually exercised during execution of the test case, relative to all communication paths related to the changed application components. The larger the communication path coverage value, the stronger the correlation between the test case and the application change. The communication path coverage CPCov can be calculated using the following equation (2):
CPCov = P_trace / P_total   (2)
where P_trace represents the number of communication paths related to the changed application components that appear in the distributed trace record, and P_total represents the total number of communication paths related to the changed application components.
In the example shown in FIG. 2, there are three changed application components in the application, Service 1, Service 5 and Service 9. Furthermore, from the communication path information appearing in the distributed trace record shown in FIG. 5, five communication paths related to the changed application components can be found in the distributed trace record, while the total number of communication paths related to the changed application components is 13, so CPCov = 5/13 = 38.46%.
When performing the correlation analysis between a test case and the application change, how to evaluate the error detection capability can also be considered. A test case being related to the application change means that the test case has the error detection capability to discover defects hidden in the application change. Directly measuring this error detection capability is difficult, but test intensity can be used to characterize it. In a distributed trace record, if the processing flow of the user transaction uses an operation in an application component, there is a corresponding span; therefore, the number of times the changed application components are invoked during test case execution can be obtained by counting the spans that invoke the changed application components. Based on the above discussion, in the present disclosure, the test intensity TInt can be defined as the number of spans related to the changed application components. In the example shown in FIG. 2, there are three spans related to the changed application components, so the test intensity TInt has a value of 3. The larger the value of the test intensity TInt, the stronger the correlation between the test case and the application change.
此外,在进行测试用例与应用程序变更之间的相关性分析时,还可以考虑测试效率。在本公开的一个示例中,测试效率的示例可以包括但不限于时间效率TEff和工作效率EEff。
术语“时间效率TEff”利用已变更应用程序组件所花费的时间在分布式跟踪记录的总处理时间中的占比来表示。时间效率TEff可以利用下述公式(3)来计算出:
TEff=T 已变更组件/T total   (3)
其中,T 已变更组件表示已变更应用程序组件所花费的时间,以及T total表示分布式跟踪记录的总处理时间。
In the example shown in FIG. 2, TEff = (span2 + span4 + span6) / (span1 + span2 + span3 + span4 + span5 + span6 + span7 + span8 + span9), which gives TEff = 35.12%.
The term "work efficiency EEff" is expressed as the ratio of the number of spans related to the changed application components to the total number of spans in the distributed trace record. The work efficiency EEff can be calculated using equation (4) below:
EEff = S_changed / S_total   (4)
where S_changed denotes the number of spans related to the changed application components, and S_total denotes the total number of spans in the distributed trace record.
In the example shown in FIG. 2, the number of spans related to the changed application components is 3 and the total number of spans in the distributed trace record is 9, so EEff = 3/9 = 33.33%.
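A short sketch of equations (3) and (4) over span durations; the sample spans and durations are invented and do not reproduce the 35.12% figure from the example above.

    sample_trace = [
        {"localEndpoint": {"serviceName": "Service 1"}, "duration": 20},
        {"localEndpoint": {"serviceName": "Service 2"}, "duration": 15},
        {"localEndpoint": {"serviceName": "Service 9"}, "duration": 25},
        {"localEndpoint": {"serviceName": "Service 3"}, "duration": 40},
    ]
    changed = {"Service 1", "Service 5", "Service 9"}

    def time_and_work_efficiency(trace, changed_components):
        changed_spans = [s for s in trace
                         if s["localEndpoint"]["serviceName"] in changed_components]
        # Equation (3): time spent in changed components / total trace time.
        t_eff = sum(s["duration"] for s in changed_spans) / sum(s["duration"] for s in trace)
        # Equation (4): spans on changed components / total spans.
        e_eff = len(changed_spans) / len(trace)
        return t_eff, e_eff

    print(time_and_work_efficiency(sample_trace, changed))  # (0.45, 0.5) for this made-up data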
After the correlation measurement values of the candidate test cases have been obtained as above, at 140, target test cases are selected from the candidate test case set according to the correlation measurement values of the candidate test cases.
Optionally, in one example of the present disclosure, selecting target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases may include: determining a performance measurement value of each candidate test case according to its correlation measurement values; and selecting target test cases from the candidate test case set according to the performance measurement values of the candidate test cases.
In one example, for each candidate test case, the performance measurement value may be determined using the correlation measurement values of the candidate test case together with their respective weights, i.e. P(f) = W_CCov·CCov + W_CPCov·CPCov + W_TInt·TInt + W_TEff·TEff + W_EEff·EEff. Here, each weight is a decimal value between 0 and 1, and the sum of all weights equals 1.
It should be noted that, in the weight allocation scheme above, in one example, the weights may first be allocated to the test coverage, test intensity and test efficiency metrics. Then, where a correlation metric has several computation indicators (for example, test coverage has the two indicators component coverage and communication path coverage, and test efficiency has the two indicators time efficiency and work efficiency), the weight allocated to test coverage may be split again, i.e. the re-allocated weights are assigned to the component coverage and communication path coverage indicators. Likewise, the weight allocated to test efficiency is split again and assigned to time efficiency and work efficiency. For example, the weight may be distributed evenly over the three correlation metrics test coverage, test intensity and test efficiency, giving each a weight of 1/3; the weights of 1/3 allocated to test coverage and test efficiency are then each split evenly again, so that component coverage, communication path coverage, time efficiency and work efficiency each receive a weight of 1/6. When the performance measurement value is finally calculated, component coverage, communication path coverage, time efficiency and work efficiency therefore each have a weight of 1/6, and test intensity has a weight of 1/3.
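A short sketch of the performance measurement value P(f) with the hierarchically split weights described above; the metric values are illustrative and assumed to be normalized already.

    # 1/3 per metric, halved again for the two coverage and the two efficiency indicators.
    WEIGHTS = {
        "CCov": 1/6, "CPCov": 1/6,   # test coverage split over its two indicators
        "TInt": 1/3,                 # test intensity
        "TEff": 1/6, "EEff": 1/6,    # test efficiency split over its two indicators
    }

    def performance_value(metrics, weights=WEIGHTS):
        # metrics: normalized correlation measurement values of one candidate test case.
        return sum(weights[name] * metrics[name] for name in weights)

    case_metrics = {"CCov": 0.67, "CPCov": 0.38, "TInt": 0.6, "TEff": 0.35, "EEff": 0.33}
    print(f"P(f) = {performance_value(case_metrics):.3f}")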
In addition, it should be noted that, optionally, in one example, in order for all correlation measurement values to share the same value range when calculating the performance measurement value, the correlation measurement values need to be normalized. In the example above, all correlation measurement values except TInt already have a value range of [0, 1], so only TInt needs to be normalized. FIG. 6 shows an example schematic diagram of the normalized test intensity measurement values according to an embodiment of the present disclosure; in the diagram shown in FIG. 6, the label "AVERAGE" denotes the average of the TInt values calculated over all candidate test cases. The normalized correlation measurement values are then used to calculate the performance measurement values. By using normalized correlation measurement values to determine the degree of association between a test case and the application change, the adverse effect introduced by the different measurement units of the correlation measurement values can be eliminated, thereby improving the accuracy of the association analysis.
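Since the disclosure does not fix a normalization formula, the sketch below uses min-max scaling of TInt over all candidates as one plausible (assumed) choice, and also computes the AVERAGE reference value shown in FIG. 6.

    def normalize(values):
        # Min-max scaling into [0, 1]; all-equal values map to 1.0.
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    tint_per_case = [3, 7, 1, 5]                     # raw TInt per candidate test case (made up)
    print(normalize(tint_per_case))                  # approx. [0.33, 1.0, 0.0, 0.67]
    avg = sum(tint_per_case) / len(tint_per_case)    # the "AVERAGE" reference line from FIG. 6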
In addition, optionally, when selecting target test cases from the candidate test case set according to the performance measurement values of the candidate test cases, in one example, candidate test cases whose performance measurement value is higher than a predetermined threshold may be selected from the candidate test case set as the target test cases. For example, candidate test cases with a performance measurement value greater than 0.5 may be determined as target test cases. In another example, candidate test cases whose performance measurement value lies within the Top N may be selected from the candidate test case set as the target test cases, where N is a predetermined positive integer or a predetermined proportion. For example, the 10 candidate test cases ranked in the Top 10, or the candidate test cases ranked in the Top 20%, may be determined as target test cases. In yet another example, target test cases may be selected from the candidate test case set according to the performance measurement values of the candidate test cases on the basis of a first predetermined test case selection strategy. Here, the first predetermined test case selection strategy is a combined selection strategy based on the performance measurement values of the candidate test cases. For example, an example of the first predetermined test case selection strategy may be ((P(f), Top(20%)) AND (P(f), HigherThan(0.5))), meaning that the test cases whose performance measurement value is both ranked in the Top 20% and greater than 0.5 are selected from the candidate test cases as target test cases. It should be noted that the above is merely an example of the first predetermined test case selection strategy, and other suitable combined selection strategies may be used in this description.
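The three selection modes above could be realized roughly as follows (a sketch, not the disclosure's implementation); the 0.5 threshold and the 20% proportion are the illustrative values from the text, and the scores are made up.

    def select_by_threshold(scores, threshold=0.5):
        return {case for case, p in scores.items() if p > threshold}

    def select_top_fraction(scores, fraction=0.2):
        ranked = sorted(scores, key=scores.get, reverse=True)
        k = max(1, int(len(ranked) * fraction))
        return set(ranked[:k])

    def first_strategy(scores):
        # ((P(f), Top(20%)) AND (P(f), HigherThan(0.5)))
        return select_top_fraction(scores, 0.2) & select_by_threshold(scores, 0.5)

    scores = {"tc1": 0.72, "tc2": 0.48, "tc3": 0.55, "tc4": 0.31, "tc5": 0.66}
    print(first_strategy(scores))  # {'tc1'}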
In addition, optionally, in one example of the present disclosure, when selecting target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases, the correlation measurement values of the candidate test cases may also be provided to a test case selection model to determine the target test cases. Here, the test case selection model may be trained using correlation measurement values obtained from historical data as model training data. By training the test case selection model on correlation measurement values obtained from historical data and using the trained model to select target test cases, the accuracy of target test case determination can be improved.
In addition, optionally, in one example of the present disclosure, when selecting target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases, target test cases may be selected from the candidate test case set according to the correlation measurement values of the candidate test cases on the basis of a second predetermined test case selection strategy. Here, the second predetermined test case selection strategy may be a combined selection strategy based on the correlation measurement values of the candidate test cases. For example, an example of the second predetermined test case selection strategy may be ((TInt, HigherThan(AVERAGE×150%)) OR ((CCov, TopValues(30%)) AND (CPCov, TopValues(30%)))), meaning that the test cases whose test intensity is greater than 1.5 times the average test intensity of all test cases, or whose component coverage and communication path coverage are both ranked in the Top 30%, are selected from the candidate test cases as target test cases. It should be noted that the above is merely an example of the second predetermined test case selection strategy, and other suitable combined selection strategies may be used in this description.
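A sketch of the second combined strategy over raw correlation measurement values; the candidate metric values below are invented for illustration.

    candidates = {
        "tc1": {"TInt": 8, "CCov": 0.67, "CPCov": 0.38},
        "tc2": {"TInt": 2, "CCov": 1.00, "CPCov": 0.62},
        "tc3": {"TInt": 3, "CCov": 0.33, "CPCov": 0.15},
        "tc4": {"TInt": 1, "CCov": 0.67, "CPCov": 0.46},
    }

    def top_fraction(metric, fraction):
        ranked = sorted(candidates, key=lambda c: candidates[c][metric], reverse=True)
        return set(ranked[:max(1, int(len(ranked) * fraction))])

    avg_tint = sum(m["TInt"] for m in candidates.values()) / len(candidates)
    by_intensity = {c for c, m in candidates.items() if m["TInt"] > avg_tint * 1.5}
    by_coverage = top_fraction("CCov", 0.3) & top_fraction("CPCov", 0.3)
    # ((TInt, HigherThan(AVERAGE x 150%)) OR ((CCov, TopValues(30%)) AND (CPCov, TopValues(30%))))
    targets = by_intensity | by_coverage
    print(targets)  # {'tc1', 'tc2'} for this made-up data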
Several test case selection strategies have been described above, and different strategies can be implemented flexibly in the DevOps process according to the processing stage and the quality verification goal. This flexibility greatly facilitates continuous testing in the DevOps process.
FIG. 7 shows an example schematic diagram of applying different test case selection strategies in the DevOps process.
In the example of FIG. 7, the DevOps process includes three levels of testing: testing against the developer branch, testing against the feature branch, and testing against the release branch. Different levels of testing have different quality verification concerns, and the test case selection strategies used differ accordingly.
The developer branch is a local copy of the application code repository whose purpose is to let software developers commit their code changes. Testing against the developer branch aims to give developers fast quality feedback, for example whether the application still behaves as expected after a developer's change. In other words, the focus of test verification is to quickly detect whether defects exist in the application. A test case selection strategy based on performance measurement values can therefore be used. For example, weights of 0.2, 0.4 and 0.4 can be assigned to test coverage, test intensity and test efficiency, the weights expressing their priority in the performance evaluation. Then, after all performance measurement values have been calculated, the 20% of test cases with the best performance measurement values are selected as target test cases, i.e. the test case selection strategy is (P(f), TopValues(20%)), where the weights used for test coverage, test intensity and test efficiency in calculating the performance measurement value are 0.2, 0.4 and 0.4, respectively.
The feature branch is shared by several developers working together on a task. Whenever a developer considers their work complete, the code changes in their developer branch are committed to the feature branch. Testing against the feature branch should ensure that the application changes made by different developers are consistent with each other, and provide timely feedback when a code change committed by one developer causes an overall failure of the software. Test coverage and test intensity are therefore more important when considering the test case selection criteria for feature branch testing. At the same time, developers commit code changes to the feature branch less frequently than to their own developer branches, so the requirements on test efficiency can be relaxed compared with test case selection for the developer branch. In this case, a selection strategy based on performance measurement values can again be used, but the weights assigned to test coverage, test intensity and test efficiency need to be adjusted, for example to 0.4, 0.4 and 0.2. For example, the test case selection strategy may be (P(f), TopValues(50%)), where the weights used for test coverage, test intensity and test efficiency in calculating the performance measurement value are 0.4, 0.4 and 0.2, respectively.
The release branch is the basis for official software releases; once all work is complete, the code changes in the feature branch are finally merged here. Testing against the release branch aims to maximize the ability to detect faults related to the code changes. The following test case selection strategy can therefore be used: (CCov, HigherThan(0)). This means that a test case should be selected as a target test case if it exercises at least one changed component. In other words, in testing against the release branch, all test cases that can exercise the changed components are run, regardless of their test intensity and test efficiency correlation measurement values, so as to maximize the software defect detection capability.
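The three branch-level policies could be captured in a simple configuration table such as the following sketch; the weights and rules are the illustrative values from the text, not values mandated by the disclosure.

    # Branch-specific selection policies (illustrative configuration only).
    BRANCH_STRATEGIES = {
        "developer": {"weights": {"coverage": 0.2, "intensity": 0.4, "efficiency": 0.4},
                      "rule": "(P(f), TopValues(20%))"},
        "feature":   {"weights": {"coverage": 0.4, "intensity": 0.4, "efficiency": 0.2},
                      "rule": "(P(f), TopValues(50%))"},
        "release":   {"weights": None,
                      "rule": "(CCov, HigherThan(0))"},
    }
    print(BRANCH_STRATEGIES["release"]["rule"])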
The test case selection method based on change correlation analysis for the DevOps process according to embodiments of the present disclosure has been described above with reference to FIG. 1 to FIG. 7.
With the above method, the correlation measurement values of the historical test cases with respect to the application change are determined from the distributed trace records of the historical test cases and the code change information of the application change, and the correlation measurement values of the test cases are used to select the target test cases, so that test verification can be performed using only the test cases that are highly relevant to the application change. This reduces the number of test cases executed, shortens the time required for testing and at the same time improves testing efficiency.
With the above method, using test coverage, test intensity and test efficiency as correlation measurement values reflects the correlation between a test case and the application change more accurately, so that highly relevant test cases can be selected more accurately, further improving testing efficiency.
With the above method, using the component coverage and/or communication path coverage indicators for test coverage evaluation, and the time efficiency and/or work efficiency indicators for test efficiency evaluation, allows the correlation between a test case and the application change to be determined more accurately, improving the accuracy with which the correlation measurement values of the test cases are determined and thus further improving testing efficiency.
With the above method, assigning different weights to the correlation measurement values makes the selection of target test cases better fit the application scenario or specific requirements.
With the above method, performing target test case selection with different selection strategies allows the selected target test cases to suit different application scenarios, for example scenarios with different application environments, different testing purposes or different application requirements.
With the above method, using a combined strategy based on the correlation measurement values to select target test cases makes the selection of target test cases simpler and more efficient.
FIG. 8 shows an example block diagram of a test case selection apparatus 800 based on change correlation analysis for the DevOps process according to an embodiment of the present disclosure. As shown in FIG. 8, the test case selection apparatus 800 includes a trace record acquisition unit 810, a code change information acquisition unit 820, a correlation measurement value determination unit 830 and a test case selection unit 840.
The trace record acquisition unit 810 is configured to obtain distributed trace records of a historical test case set executed against the application, the distributed trace records showing how a user transaction triggered by the execution of a test case is processed within the application. For the operation of the trace record acquisition unit 810, reference may be made to the operation 110 described above with reference to FIG. 1.
The code change information acquisition unit 820 is configured to obtain code change information of the application change. For the operation of the code change information acquisition unit 820, reference may be made to the operation 120 described above with reference to FIG. 1.
The correlation measurement value determination unit 830 is configured to determine, from the distributed trace records and the code change information, the correlation measurement values of the candidate test cases in the candidate test case set with respect to the application change. For the operation of the correlation measurement value determination unit 830, reference may be made to the operation 130 described above with reference to FIG. 1.
The test case selection unit 840 is configured to select target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases. For the operation of the test case selection unit 840, reference may be made to the operation 140 described above with reference to FIG. 1.
In addition, optionally, in one example, the correlation measurement values include at least one of the following measurement values: test coverage, test intensity and test efficiency.
In addition, optionally, in one example, the test coverage includes component coverage and/or communication path coverage, and/or the test efficiency includes time efficiency and/or work efficiency.
FIG. 9 shows a block diagram of an implementation example of the test case selection unit 840 according to an embodiment of the present disclosure. As shown in FIG. 9, the test case selection unit 840 includes a performance measurement value determination module 841 and a test case selection module 843.
The performance measurement value determination module 841 is configured to determine a performance measurement value of each candidate test case according to the correlation measurement values of the candidate test case. The test case selection module 843 then selects target test cases from the candidate test case set according to the performance measurement values of the candidate test cases.
In addition, optionally, in one example, each correlation measurement value has a weight. Accordingly, the performance measurement value determination module 841 is configured to determine the performance measurement value of each candidate test case using the correlation measurement values of the candidate test case and their respective weights.
In addition, optionally, in one example, the test case selection module 843 is configured to select, from the candidate test case set, candidate test cases whose performance measurement value is higher than a predetermined threshold as the target test cases. In another example, the test case selection module 843 is configured to select, from the candidate test case set, candidate test cases whose performance measurement value lies within the Top N as the target test cases, where N is a predetermined positive integer or a predetermined proportion. In yet another example, the test case selection module 843 is configured to select target test cases from the candidate test case set according to the performance measurement values of the candidate test cases on the basis of a first predetermined test case selection strategy, the first predetermined test case selection strategy being a combined selection strategy based on the performance measurement values of the candidate test cases.
In addition, optionally, in one example, the test case selection unit 840 is configured to provide the correlation measurement values of the candidate test cases to a test case selection model to determine the target test cases.
In addition, optionally, in one example, the test case selection unit 840 is configured to select target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases on the basis of a second predetermined test case selection strategy, the second predetermined test case selection strategy being a combined selection strategy based on the correlation measurement values of the candidate test cases.
The test case selection method and the test case selection apparatus according to embodiments of the present disclosure have been described above with reference to FIG. 1 to FIG. 9. The above test case selection apparatus may be implemented in hardware, in software, or in a combination of hardware and software.
FIG. 10 shows a schematic diagram of a computing device 1000 for implementing the test case selection process based on change correlation analysis according to an embodiment of the present disclosure. As shown in FIG. 10, the computing device 1000 may include at least one processor 1010, a storage (e.g. a non-volatile storage) 1020, a memory 1030 and a communication interface 1040, and the at least one processor 1010, the storage 1020, the memory 1030 and the communication interface 1040 are connected together via a bus 1060. The at least one processor 1010 executes at least one computer-readable instruction (i.e. an element implemented in software as described above) stored or encoded in the storage.
In one embodiment, computer-executable instructions are stored in the storage which, when executed, cause the at least one processor 1010 to: obtain distributed trace records of a historical test case set executed against the application and code change information of the application change, the distributed trace records showing how a user transaction triggered by the execution of a test case is processed within the application; determine, from the distributed trace records and the code change information, correlation measurement values of the candidate test cases in a candidate test case set with respect to the application change; and select target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases.
It should be understood that the computer-executable instructions stored in the storage, when executed, cause the at least one processor 1010 to perform the various operations and functions described above in connection with FIG. 1 to FIG. 9 in the various embodiments of this description.
According to one embodiment, a program product such as a machine-readable medium (e.g. a non-transitory machine-readable medium) is provided. The machine-readable medium may have instructions (i.e. elements implemented in software as described above) which, when executed by a machine, cause the machine to perform the various operations and functions described above in connection with FIG. 1 to FIG. 9 in the various embodiments of this description. Specifically, a system or apparatus equipped with a readable storage medium may be provided, on which software program code implementing the functions of any of the above embodiments is stored, and a computer or processor of the system or apparatus reads and executes the instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium can implement the functions of any of the above embodiments, so the machine-readable code and the readable storage medium storing the machine-readable code form part of the present invention.
Embodiments of the readable storage medium include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW), magnetic tapes, non-volatile memory cards and ROM. Alternatively, the program code may be downloaded from a server computer or from the cloud via a communication network.
Those skilled in the art should understand that various variations and modifications can be made to the embodiments disclosed above without departing from the essence of the invention. Therefore, the scope of protection of the present invention shall be defined by the appended claims.
It should be noted that not all of the steps and units in the above flows and system structure diagrams are necessary; certain steps or units may be omitted according to actual needs. The order in which the steps are performed is not fixed and may be determined as needed. The apparatus structures described in the above embodiments may be physical structures or logical structures, i.e. some units may be implemented by the same physical entity, some units may be implemented separately by several physical entities, or some units may be implemented jointly by certain components in several independent devices.
In the above embodiments, hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may include permanently dedicated circuits or logic (such as a dedicated processor, an FPGA or an ASIC) to perform the corresponding operations. A hardware unit or processor may also include programmable logic or circuits (such as a general-purpose processor or another programmable processor) that can be temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical, dedicated permanent circuits, or temporarily configured circuits) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the accompanying drawings describes exemplary embodiments, but does not represent all embodiments that may be implemented or that fall within the scope of protection of the claims. The term "exemplary" used throughout this description means "serving as an example, instance or illustration" and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the technology may be practiced without these specific details. In some instances, well-known structures and apparatuses are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The above description of the present disclosure is provided to enable any person of ordinary skill in the art to implement or use the present disclosure. Various modifications to the present disclosure will be apparent to those of ordinary skill in the art, and the general principles defined herein may also be applied to other variations without departing from the scope of protection of the present disclosure. Therefore, the present disclosure is not limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

  1. A test case selection method (100) based on change correlation analysis for an integrated development and operations (DevOps) process, comprising:
    obtaining distributed trace records (110) of a historical test case set executed against an application and code change information (120) of an application change, the distributed trace records showing how a user transaction triggered by the execution of a test case is processed within the application;
    determining (130), from the distributed trace records and the code change information, correlation measurement values of candidate test cases in a candidate test case set with respect to the application change; and
    selecting (140) target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases.
  2. The test case selection method (100) according to claim 1, wherein the correlation measurement values include at least one of the following measurement values: test coverage, test intensity and test efficiency.
  3. The test case selection method (100) according to claim 2, wherein the test coverage includes component coverage and/or communication path coverage, and/or
    the test efficiency includes time efficiency and/or work efficiency.
  4. The test case selection method (100) according to claim 2, wherein the correlation measurement values include normalized correlation measurement values.
  5. The test case selection method (100) according to any one of claims 1 to 4, wherein selecting (140) target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases comprises:
    determining a performance measurement value of each candidate test case according to the correlation measurement values of the candidate test case; and
    selecting target test cases from the candidate test case set according to the performance measurement values of the candidate test cases.
  6. The test case selection method (100) according to claim 5, wherein each correlation measurement value has a weight, and
    determining a performance measurement value of each candidate test case according to the correlation measurement values of the candidate test case comprises:
    determining the performance measurement value of each candidate test case according to the correlation measurement values of the candidate test case and their respective weights.
  7. The test case selection method (100) according to claim 5, wherein selecting target test cases from the candidate test case set according to the performance measurement values of the candidate test cases comprises:
    selecting, from the candidate test case set, candidate test cases whose performance measurement value is higher than a predetermined threshold as the target test cases;
    selecting, from the candidate test case set, candidate test cases whose performance measurement value lies within the Top N as the target test cases, the N being a predetermined positive integer or a predetermined proportion; or
    selecting, on the basis of a first predetermined test case selection strategy, the target test cases from the candidate test case set according to the performance measurement values of the candidate test cases, the first predetermined test case selection strategy being a combined selection strategy based on the performance measurement values of the candidate test cases.
  8. The test case selection method (100) according to any one of claims 1 to 4, wherein selecting (140) target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases comprises:
    providing the correlation measurement values of the candidate test cases to a test case selection model to determine the target test cases.
  9. The test case selection method (100) according to any one of claims 1 to 4, wherein selecting (140) target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases comprises:
    selecting, on the basis of a second predetermined test case selection strategy, target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases, the second predetermined test case selection strategy being a combined selection strategy based on the correlation measurement values of the candidate test cases.
  10. A test case selection apparatus (800) based on change correlation analysis for an integrated development and operations (DevOps) process, comprising:
    a trace record acquisition unit (810), configured to obtain distributed trace records of a historical test case set executed against an application, the distributed trace records showing how a user transaction triggered by the execution of a test case is processed within the application;
    a code change information acquisition unit (820), configured to obtain code change information of an application change;
    a correlation measurement value determination unit (830), configured to determine, from the distributed trace records and the code change information, correlation measurement values of candidate test cases in a candidate test case set with respect to the application change; and
    a test case selection unit (840), configured to select target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases.
  11. The test case selection apparatus (800) according to claim 10, wherein the correlation measurement values include at least one of the following measurement values: test coverage, test intensity and test efficiency.
  12. The test case selection apparatus (800) according to claim 11, wherein the test coverage includes component coverage and/or communication path coverage, and/or
    the test efficiency includes time efficiency and/or work efficiency.
  13. The test case selection apparatus (800) according to any one of claims 10 to 12, wherein the test case selection unit (840) comprises:
    a performance measurement value determination module (841), configured to determine a performance measurement value of each candidate test case according to the correlation measurement values of the candidate test case; and
    a test case selection module (843), configured to select target test cases from the candidate test case set according to the performance measurement values of the candidate test cases.
  14. The test case selection apparatus (800) according to claim 13, wherein each correlation measurement value has a weight, and
    the performance measurement value determination module (841) is configured to determine the performance measurement value of each candidate test case according to the correlation measurement values of the candidate test case and their respective weights.
  15. The test case selection apparatus (800) according to claim 13, wherein the test case selection module (843) is configured to:
    select, from the candidate test case set, candidate test cases whose performance measurement value is higher than a predetermined threshold as the target test cases;
    select, from the candidate test case set, candidate test cases whose performance measurement value lies within the Top N as the target test cases, the N being a predetermined positive integer or a predetermined proportion; or
    select, on the basis of a first predetermined test case selection strategy, the target test cases from the candidate test case set according to the performance measurement values of the candidate test cases, the first predetermined test case selection strategy being a combined selection strategy based on the performance measurement values of the candidate test cases.
  16. The test case selection apparatus (800) according to any one of claims 10 to 12, wherein the test case selection unit (840) is configured to provide the correlation measurement values of the candidate test cases to a test case selection model to determine the target test cases.
  17. The test case selection apparatus (800) according to any one of claims 10 to 12, wherein the test case selection unit (840) is configured to select, on the basis of a second predetermined test case selection strategy, target test cases from the candidate test case set according to the correlation measurement values of the candidate test cases, the second predetermined test case selection strategy being a combined selection strategy based on the correlation measurement values of the candidate test cases.
  18. A computing device (1000), comprising:
    at least one processor (1010); and
    a memory (1020) coupled to the at least one processor (1010) and configured to store instructions which, when executed by the at least one processor (1010), cause the at least one processor (1010) to perform the method according to any one of claims 1 to 9.
  19. A machine-readable storage medium storing executable instructions which, when executed, cause a machine to perform the method according to any one of claims 1 to 9.
  20. A computer program product tangibly stored on a computer-readable medium and comprising computer-executable instructions which, when executed, cause at least one processor to perform the method according to any one of claims 1 to 9.
PCT/CN2020/117921 2020-09-25 2020-09-25 Test case selection method and apparatus based on change correlation analysis WO2022061779A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/117921 WO2022061779A1 (zh) 2020-09-25 2020-09-25 Test case selection method and apparatus based on change correlation analysis
EP20954631.6A EP4202689A4 (en) 2020-09-25 2020-09-25 METHOD AND APPARATUS FOR SELECTING A TEST CASE BASED ON CHANGE CORRELATION ANALYSIS
CN202080104035.3A CN116097227A (zh) 2020-09-25 2020-09-25 Test case selection method and apparatus based on change correlation analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/117921 WO2022061779A1 (zh) 2020-09-25 2020-09-25 Test case selection method and apparatus based on change correlation analysis

Publications (1)

Publication Number Publication Date
WO2022061779A1 true WO2022061779A1 (zh) 2022-03-31

Family

ID=80847058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117921 WO2022061779A1 (zh) 2020-09-25 2020-09-25 Test case selection method and apparatus based on change correlation analysis

Country Status (3)

Country Link
EP (1) EP4202689A4 (zh)
CN (1) CN116097227A (zh)
WO (1) WO2022061779A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170046245A1 (en) * 2015-08-13 2017-02-16 Ca, Inc. Method and Apparatus for Recommending Regression Tests
CN107515826A (zh) * 2017-08-28 2017-12-26 Guangzhou Alibaba Literature Information Technology Co., Ltd. Accurate test case recommendation method, apparatus, system, device and storage medium
CN108427637A (zh) * 2018-01-18 2018-08-21 Ping An Technology (Shenzhen) Co., Ltd. Test case recommendation method, electronic apparatus and readable storage medium
CN110413506A (zh) * 2019-06-19 2019-11-05 Ping An Puhui Enterprise Management Co., Ltd. Test case recommendation method, apparatus, device and storage medium
CN111209217A (zh) * 2020-03-12 2020-05-29 Suzhou Inspur Intelligent Technology Co., Ltd. System software function test method, apparatus, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4202689A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117520211A (zh) * 2024-01-08 2024-02-06 Jiangxi University of Finance and Economics Random combination test case generation method and system based on multi-dimensional coverage matrix

Also Published As

Publication number Publication date
EP4202689A1 (en) 2023-06-28
EP4202689A4 (en) 2024-05-22
CN116097227A (zh) 2023-05-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20954631

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020954631

Country of ref document: EP

Effective date: 20230321

NENP Non-entry into the national phase

Ref country code: DE