CN117033226A - Evaluation method for automatic test effect - Google Patents

Evaluation method for automatic test effect

Info

Publication number
CN117033226A
Authority
CN
China
Prior art keywords
test
automatic test
data
automated
automated test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311050418.9A
Other languages
Chinese (zh)
Inventor
张丽丽
陈晖�
许铭
林真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jiahejingwei Electronic Technology Ltd
Original Assignee
Shenzhen Jiahejingwei Electronic Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jiahejingwei Electronic Technology Ltd filed Critical Shenzhen Jiahejingwei Electronic Technology Ltd
Priority to CN202311050418.9A
Publication of CN117033226A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3676 Test management for coverage analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention relates to the technical field of computer applications and discloses an evaluation method for the effect of automated testing, comprising the following steps. S1, determining evaluation aspects: determining the key aspects of the automated test to be evaluated, and testing and evaluating the coverage rate and reliability of the automated test. S2, establishing a baseline: recording the performance level and overall detection effect of the automated test under normal conditions, so that the improved effect can be compared and evaluated later. S3, defining targets and indexes: redefining the targets and indexes of the software or equipment under automated test through hierarchical scoring and enhanced qualitative analysis. Because judgment is not based on coverage rate alone, important parts left uncovered by the test can be found, improving detection accuracy during automated testing.

Description

Evaluation method for automatic test effect
Technical Field
The invention relates to the technical field of computer applications, and in particular to an evaluation method for the effect of automated testing.
Background
In general, automated testing converts human-driven test behavior into machine execution: the machine carries out the tests, replacing manual testing, reducing project cost and improving overall test efficiency.
Before automated testing begins, targets and test indexes must be defined so that the machine can compare results against the specified targets; this makes automated testing possible and allows test work to be carried out at scale. However, when the machine relies on quantitative indexes alone, such as test coverage rate and the number of defects found, the data, although offering valuable insight, cannot fully represent the quality and effect of the automated test, nor show the test effect intuitively, so some test results are easily inaccurate. The effect of automated testing therefore needs to be evaluated in order to judge its value and quality.
Disclosure of Invention
(I) Technical problems to be solved
In view of the deficiencies of the prior art, the invention provides an evaluation method for the effect of automated testing, solving the problem that when a machine relies on quantitative indexes to detect, such as test coverage rate and the number of defects found, the data, although offering valuable insight, cannot fully represent the quality and effect of the automated test or show the test effect intuitively.
(II) Technical solution
In order to achieve the above purpose, the invention adopts the following technical solution: an evaluation method for the effect of automated testing, characterized in that it comprises the following steps:
S1, determining evaluation aspects: determining the key aspects of the automated test to be evaluated, and testing and evaluating the coverage rate and reliability of the automated test;
S2, establishing a baseline: recording the performance level and overall detection effect of the automated test under normal conditions, so that the improved effect can be compared and evaluated later;
S3, defining targets and indexes: redefining the targets and indexes of the software or equipment under automated test through hierarchical scoring and enhanced qualitative analysis;
S4, implementing the automated test: comparing the data detected in the automated test with the reset target and index data in the automated test software;
S5, collecting and analyzing data: determining the data type and data source required by each index, creating or using tools to collect data automatically, and analyzing the collected data to determine how the improved automated test performs in various aspects;
S6, comparing the improved automated test results with the normal automated test baseline, and determining the advantages and disadvantages of the improved automated test;
S7, evaluating maintainability and extensibility: evaluating the later maintainability and extensibility of the improved automated test based on the evaluation results, and providing a specific improvement plan or suggestions for the improved automated test;
S8, acquiring feedback: recording the defects of the improved automated test, including their severity, time to repair and related test cases, which helps evaluate how effective the automated test is at finding and tracking defects;
S9, generating a report: organizing the previously collected data and summarizing the feedback data and results;
S10, improvement: treating automated test evaluation as a process for improving automated testing, and continuously improving and perfecting the automated testing method through the generated reports and documents.
Preferably, in step S1, the coverage rate and reliability of the automated test need to be determined. Coverage rate refers to the percentage of code covered by the test cases; reliability refers to stability during the automated testing process, i.e., the structure of the automated test is stable and reliable when its results do not change under the influence of different environments and external factors.
Preferably, in step S3, redefining the targets and indexes includes enhanced qualitative analysis, which means evaluating the effect of the automated test through simulation and emulation testing, predicting the effect of the improved automated test, and, combined with inspecting the automated testing process, discovering problems that numeric indexes may overlook.
Preferably, in step S3, redefining the targets and indexes includes hierarchical scoring, which means establishing a new testing method: first determining the value coefficients of the different areas to be tested, then determining the scoring standards, and scoring the different tested areas according to the rule that the score grows in proportion to the value of each area.
Preferably, in step S3, redefining the targets and indexes includes continuous tracking and adjustment, which means that as market and project requirements change, the targets and indexes of the automated test can be adjusted appropriately according to the actual demands of the market.
Preferably, in step S4, the test data are collected, it is ensured that all necessary data have been gathered, the data are sorted, and the sorted data are compared with the set targets to judge whether the tested data are qualified.
Preferably, in step S6, when comparing the improved automated test results with the normal automated test baseline, why the improvements are effective is analyzed in depth wherever the results reach or exceed the normal baseline, and an attempt is made to understand the root cause of the defects wherever the results fall short of the baseline.
Preferably, in step S9, the data and analysis results are presented using charts, graphs and visualization tools, so that the effects and trends of the automated test can be displayed more intuitively and the audience can understand and compare them more easily.
(III) Beneficial effects
The invention provides an evaluation method for the effect of automated testing, with the following beneficial effects:
(1) When this evaluation method is used, enhanced qualitative analysis and hierarchical scoring are applied in redefining the targets and indexes, so that the different tested areas are scored according to the rule that the score grows in proportion to each area's value. Judgment is therefore not based on coverage rate alone, important parts left uncovered by the test can be found, and detection accuracy during automated testing is improved.
(2) When this evaluation method is used, continuous tracking and adjustment is applied in redefining the targets and indexes, so that the targets and indexes of the automated test are adjusted appropriately as market and project demands change, the automated testing work matches the actual demands of the market, and the flexibility of the automated test is improved.
(3) When this evaluation method is used, the improved automated test results are compared with a normal automated test baseline during comparison and evaluation, so that why the improvements are effective is analyzed in depth wherever the results reach or exceed the baseline, and the root cause of the defects is investigated wherever the results fall short of the baseline.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the S3 workflow architecture according to the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to FIGS. 1-2, the present invention provides an evaluation method for the effect of automated testing, comprising the following steps:
s1, determining an evaluation aspect: determining key aspects of the automatic test to be evaluated, testing and evaluating the coverage rate and the reliability of the automatic test, wherein in step S1, the coverage rate and the reliability of the automatic test are required to be determined, the coverage rate refers to the percentage of codes covered by test cases, the reliability refers to the stability in the automatic test process, and the structure of the automatic test is stable and reliable under the condition that the result of the automatic test is not changed under the influence of different environments and external factors.
S2, establishing a baseline: recording the performance level and overall detection effect of the automated test under normal conditions, including test results, execution time and error logs. Data related to the baseline indexes are collected and recorded during test execution, and the index results obtained through analysis are recorded and documented as baseline data, establishing a standard for subsequent comparison and evaluation, so that the improved automated test results can be compared against it later.
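A minimal sketch of such a baseline record, assuming per-run metrics named passed, total, seconds and errors (these field names and values are illustrative, not from the patent):

```python
import json
import statistics
from dataclasses import dataclass, asdict

@dataclass
class Baseline:
    pass_rate: float          # mean fraction of cases that passed
    mean_exec_seconds: float  # mean suite execution time
    error_count: int          # total errors seen in the logs

def build_baseline(runs):
    """Aggregate per-run metrics from normal-state runs into one record."""
    return Baseline(
        pass_rate=statistics.mean(r["passed"] / r["total"] for r in runs),
        mean_exec_seconds=statistics.mean(r["seconds"] for r in runs),
        error_count=sum(r["errors"] for r in runs),
    )

runs = [  # hypothetical results of two normal-state runs
    {"passed": 96, "total": 100, "seconds": 312.0, "errors": 2},
    {"passed": 97, "total": 100, "seconds": 305.5, "errors": 1},
]
with open("baseline.json", "w") as f:  # documented for later comparison (S6)
    json.dump(asdict(build_baseline(runs)), f, indent=2)
```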
S3, defining targets and indexes: redefining the targets and indexes of the software or equipment under automated test through hierarchical scoring and enhanced qualitative analysis. In step S3, redefining the targets and indexes includes enhanced qualitative analysis, which means evaluating the effect of the automated test through simulation and emulation testing, predicting the effect of the improved automated test, and, combined with inspecting the automated testing process, discovering problems that numeric indexes may overlook.
In step S3, redefining the targets and indexes also includes hierarchical scoring, which means establishing a new testing method: first determining the value coefficients of the different areas to be tested, then determining the scoring standards, and scoring the different tested areas according to the rule that the score grows in proportion to the value of each area, as sketched below.
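One way to read this rule is as value-weighted scoring. A minimal Python sketch (the area names and coefficients are assumptions for illustration, not values given by the patent):

```python
# Hypothetical value coefficients: a higher coefficient means the area matters more.
VALUE_COEFFICIENTS = {"payment": 3.0, "login": 2.0, "settings": 1.0}

def graded_score(area_coverage):
    """Weight each tested area's coverage by its value coefficient, so an
    area's contribution to the score grows in proportion to its value."""
    total = sum(VALUE_COEFFICIENTS.values())
    return sum(VALUE_COEFFICIENTS[a] * c for a, c in area_coverage.items()) / total

# Under this grading, 60% coverage of 'payment' counts for more than
# 90% coverage of 'settings', so gaps in high-value areas stand out.
print(round(graded_score({"payment": 0.60, "login": 0.80, "settings": 0.90}), 3))
# -> 0.717
```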
In step S3, redefining the targets and indexes further includes continuous tracking and adjustment, which means that as market and project requirements change, the targets and indexes of the automated test can be adjusted appropriately according to the actual demands of the market.
S4, implementing the automated test: comparing the data detected in the automated test with the reset target and index data in the automated test software. In step S4, the test data are collected and it is ensured that all necessary data have been gathered; the data are then sorted, and missing data, erroneous data and duplicate data may need to be handled in this process. The sorted data are finally compared with the set targets to judge whether the tested data are qualified.
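A sketch of this clean-then-compare step, assuming records shaped as {"metric": ..., "value": ...} and assumed target thresholds (neither schema nor thresholds come from the patent):

```python
def clean(records):
    """Drop records with missing fields and collapse exact duplicates."""
    seen, out = set(), []
    for r in records:
        if r.get("metric") is None or r.get("value") is None:
            continue                      # missing data
        key = (r["metric"], r["value"])
        if key in seen:
            continue                      # duplicate data
        seen.add(key)
        out.append(r)
    return out

TARGETS = {"coverage": 0.80, "pass_rate": 0.95}  # assumed target values

def qualified(records):
    """Mark each metric as qualified if it meets or exceeds its target."""
    return {r["metric"]: r["value"] >= TARGETS[r["metric"]]
            for r in clean(records) if r["metric"] in TARGETS}

print(qualified([
    {"metric": "coverage", "value": 0.83},
    {"metric": "coverage", "value": 0.83},   # duplicate, ignored
    {"metric": "pass_rate", "value": None},  # missing, ignored
]))                                          # -> {'coverage': True}
```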
S5, collecting and analyzing data: determining the data type and data source required by each index, creating or using tools to collect the data automatically, and analyzing the collected data. Suitable analysis tools and techniques, including statistical analysis and visual analysis, can be applied; through such analysis, insight into test quality, performance and stability can be gained, helping you understand the overall effectiveness of the automated test and determine how the improved automated test performs in various aspects.
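As one example of the statistical analysis mentioned above, a basic summary of a single collected metric could look like this (the run durations are invented sample data):

```python
import statistics

def summarize(samples):
    """Basic statistical view of one collected metric, e.g. run duration."""
    return {
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }

durations = [311.8, 305.2, 318.4, 309.9, 307.1]  # hypothetical run times (s)
print(summarize(durations))  # stability shows up as a small stdev
```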
S6, comparing the improved automated test results with the normal automated test baseline, and determining the advantages and disadvantages of the improved automated test. In step S6, during this comparison, why the improvements are effective is analyzed in depth wherever the results reach or exceed the normal baseline, and the root cause of the defects is investigated wherever they fall short. On the basis of the comparison and evaluation, potential problems and improvement opportunities are identified; these may involve finding test areas with low coverage, use cases with high error rates and performance bottlenecks, and according to these findings you can formulate corresponding improvement plans and strategies.
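A minimal sketch of such a baseline comparison, assuming the baseline record from the S2 sketch above; the metric names and the higher-is-better classification are assumptions:

```python
def compare_to_baseline(improved, baseline, higher_is_better):
    """Classify each metric of the improved test against the baseline."""
    report = {}
    for metric, new in improved.items():
        old = baseline[metric]
        delta = new - old if metric in higher_is_better else old - new
        if delta > 0:
            report[metric] = "exceeds baseline: analyse why the change worked"
        elif delta == 0:
            report[metric] = "at baseline"
        else:
            report[metric] = "below baseline: investigate the root cause"
    return report

print(compare_to_baseline(
    improved={"coverage": 0.86, "mean_exec_seconds": 340.0},
    baseline={"coverage": 0.80, "mean_exec_seconds": 308.0},
    higher_is_better={"coverage"},  # lower execution time is better
))
```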
S7, evaluating maintainability and extensibility: evaluating the later maintainability and extensibility of the improved automated test based on the evaluation results. The evaluation can proceed from the following aspects: first, evaluate the structure and organization of the automated test code; its maintainability and extensibility can be assessed by checking the code's layering, module division, and class and function design. Second, evaluate the degree of coupling and the dependencies in the code: check the dependency relationships between code units, reduce coupling between modules as far as possible, and improve maintainability and extensibility by using appropriate design patterns and decoupling techniques. Third, check whether duplicated code and logic exist: through code refactoring, extract reusable code segments and eliminate duplicated logic to improve the maintainability and extensibility of the code.
Based on these three points, a specific improvement plan or suggestions are provided for the improved automated test so as to improve the maintainability and extensibility of the test code.
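The third check, finding duplicated code, can be approximated mechanically. A rough sketch (the sliding-window heuristic and the test-file path are assumptions, not the patent's method):

```python
from collections import Counter

def duplicated_blocks(source: str, window: int = 4):
    """Report every window of `window` consecutive non-blank lines that
    appears more than once, a candidate for extraction into a helper."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    windows = ["\n".join(lines[i:i + window])
               for i in range(len(lines) - window + 1)]
    return [blk for blk, count in Counter(windows).items() if count > 1]

# 'tests/test_checkout.py' is a hypothetical test module path.
with open("tests/test_checkout.py") as f:
    for block in duplicated_blocks(f.read()):
        print("duplicated block:\n" + block + "\n")
```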
S8, acquiring feedback: recording the defects of the improved automated test, including their severity, time to repair and related test cases, which helps evaluate how effective the automated test is at finding and tracking defects.
Meanwhile, feedback is not acquired from data alone: surveys are conducted among different users in a refined manner. The users are roughly divided, according to their roles, into end users and developers. The developers are surveyed separately to understand the internal structural problems of the automated test, and the end users are surveyed separately to understand the problems encountered during its use. Conducting both surveys at the same time makes the consideration of problems more comprehensive and the feedback obtained more specific.
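The defect record described in S8 maps naturally onto a small data structure; a sketch with assumed field names and example values:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DefectRecord:
    defect_id: str
    severity: str               # e.g. "critical" / "major" / "minor"
    found_at: datetime
    fixed_at: Optional[datetime]
    test_case: str              # test case that exposed the defect

    def hours_to_fix(self) -> Optional[float]:
        """How long the defect stayed open; feeds the tracking evaluation."""
        if self.fixed_at is None:
            return None         # still open
        return (self.fixed_at - self.found_at).total_seconds() / 3600

d = DefectRecord("BUG-101", "major",
                 datetime(2023, 8, 1, 9, 0), datetime(2023, 8, 2, 15, 30),
                 "test_checkout_rounding")
print(d.hours_to_fix())  # 30.5
```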
S9, generating a report: first, clean and organize the previously collected data to ensure its accuracy and consistency, using data processing tools to sort, filter and deduplicate it so that it can be better organized and analyzed;
then determine the structure and content of the report: decide which information the report should contain, such as test execution results, coverage data and performance indexes, and create its basic framework, including a title, table of contents, summary and results-analysis section;
finally, visualize the data by means of charts, tables, images and the like. Visualized data present test results and trends more intuitively; select suitable visualization tools and chart types, such as line charts, bar charts and pie charts, summarize the feedback data and results, and present the data and analysis results so that the audience can understand and compare them more easily.
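As one possible realization of the line-chart idea, a short sketch using matplotlib (assuming it is installed; the run labels and percentages are invented example data):

```python
import matplotlib.pyplot as plt

runs = ["run 1", "run 2", "run 3", "run 4"]  # hypothetical report data
coverage = [72.0, 76.5, 81.2, 86.0]
pass_rate = [91.0, 93.5, 95.0, 96.5]

fig, ax = plt.subplots()
ax.plot(runs, coverage, marker="o", label="coverage %")
ax.plot(runs, pass_rate, marker="s", label="pass rate %")
ax.set_title("Automated test trend")
ax.set_ylabel("percent")
ax.legend()
fig.savefig("test_report_trend.png")  # image embedded into the report
```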
S10, improvement: improvement and promotion proceed from two directions. The first is the theoretical results obtained from data analysis: corresponding adjustments and changes are made according to the data, and methods such as data mining and machine learning can be considered to explore and discover potential patterns, anomalies and correlations. This allows the influencing factors of the automated test to be understood in depth and provides more specific guidance for improvement.
The second is improvement at the group level: the effect of the automated test is evaluated jointly in cooperation with the test team, development team and business team, listening to the comments and feedback of different stakeholders and learning their opinions of and experience with the automated test. Through collaborative assessment, the impact of automated testing on teams and projects can be fully understood and improvements made based on the feedback received.
Improving from both directions at once treats the evaluation of the automated test effect as a continuous improvement process: the effect of the automated test is reviewed and evaluated periodically, an improvement plan is formulated according to the evaluation results, and the improvement measures are incorporated into the test team's daily practice to continuously optimize the quality and efficiency of the automated test.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.

Claims (8)

1. An evaluation method for the effect of automated testing, characterized in that it comprises the following steps:
S1, determining evaluation aspects: determining the key aspects of the automated test to be evaluated, and testing and evaluating the coverage rate and reliability of the automated test;
S2, establishing a baseline: recording the performance level and overall detection effect of the automated test under normal conditions, so that the improved effect can be compared and evaluated later;
S3, defining targets and indexes: redefining the targets and indexes of the software or equipment under automated test through hierarchical scoring and enhanced qualitative analysis;
S4, implementing the automated test: comparing the data detected in the automated test with the reset target and index data in the automated test software;
S5, collecting and analyzing data: determining the data type and data source required by each index, creating or using tools to collect data automatically, and analyzing the collected data to determine how the improved automated test performs in various aspects;
S6, comparing the improved automated test results with the normal automated test baseline, and determining the advantages and disadvantages of the improved automated test;
S7, evaluating maintainability and extensibility: evaluating the later maintainability and extensibility of the improved automated test based on the evaluation results, and providing a specific improvement plan or suggestions for the improved automated test;
S8, acquiring feedback: recording the defects of the improved automated test, including their severity, time to repair and related test cases, which helps evaluate how effective the automated test is at finding and tracking defects;
S9, generating a report: organizing the previously collected data and summarizing the feedback data and results;
S10, improvement: treating automated test evaluation as a process for improving automated testing, and continuously improving and perfecting the automated testing method through the generated reports and documents.
2. The method for evaluating automated test effects according to claim 1, wherein: in step S1, the coverage rate and reliability of the automated test need to be determined; coverage rate refers to the percentage of code covered by the test cases, and reliability refers to stability during the automated testing process, i.e., the structure of the automated test is stable and reliable when its results do not change under the influence of different environments and external factors.
3. The method for evaluating automated test effects according to claim 1, wherein: in step S3, redefining the targets and indexes includes enhanced qualitative analysis, which means evaluating the effect of the automated test through simulation and emulation testing, predicting the effect of the improved automated test, and, combined with inspecting the automated testing process, discovering problems that numeric indexes may overlook.
4. The method for evaluating automated test effects according to claim 1, wherein: in step S3, redefining the targets and indexes includes hierarchical scoring, which means establishing a new testing method: first determining the value coefficients of the different areas to be tested, then determining the scoring standards, and scoring the different tested areas according to the rule that the score grows in proportion to the value of each area.
5. The method for evaluating automated test effects according to claim 1, wherein: in step S3, redefining the targets and indexes includes continuous tracking and adjustment, which means that as market and project requirements change, the targets and indexes of the automated test can be adjusted appropriately according to the actual demands of the market.
6. The method for evaluating automated test effects according to claim 1, wherein: in step S4, the test data are collected, it is ensured that all necessary data have been gathered, the data are sorted, and the sorted data are compared with the set targets to judge whether the tested data are qualified.
7. The method for evaluating automated test effects according to claim 1, wherein: in step S6, when comparing the improved automated test results with the normal automated test baseline, why the improvements are effective is analyzed in depth wherever the results reach or exceed the normal baseline, and an attempt is made to understand the root cause of the defects wherever the results fall short of the baseline.
8. The method for evaluating automated test effects according to claim 1, wherein: in step S9, the data and analysis results are presented using charts, graphs and visualization tools, so that the effects and trends of the automated test can be displayed more intuitively and the audience can understand and compare them more easily.
CN202311050418.9A 2023-08-18 2023-08-18 Evaluation method for automatic test effect Pending CN117033226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311050418.9A CN117033226A (en) 2023-08-18 2023-08-18 Evaluation method for automatic test effect


Publications (1)

Publication Number Publication Date
CN117033226A true CN117033226A (en) 2023-11-10

Family

ID=88627876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311050418.9A Pending CN117033226A (en) 2023-08-18 2023-08-18 Evaluation method for automatic test effect

Country Status (1)

Country Link
CN (1) CN117033226A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination