CN112363920A - Test case repairing method and device, computer equipment and storage medium

Test case repairing method and device, computer equipment and storage medium

Info

Publication number
CN112363920A
CN112363920A
Authority
CN
China
Prior art keywords
data
case
test
test case
target test
Prior art date
Legal status
Pending
Application number
CN202011231724.9A
Other languages
Chinese (zh)
Inventor
雷达伟
Current Assignee
Guangzhou Pinwei Software Co Ltd
Original Assignee
Guangzhou Pinwei Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Pinwei Software Co Ltd
Priority to CN202011231724.9A
Publication of CN112363920A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3692 Test management for test results analysis


Abstract

The application relates to a test case repairing method and device, a computer device, and a storage medium. The method comprises the following steps: acquiring a target test case, wherein the target test case is a test case whose number of execution failures satisfies a set condition; analyzing the test failure cause of the target test case; if the analyzed cause is a case data exception, acquiring the case data of the target test case and extracting feature information from the case data; selecting current replacement data from a standby test data set corresponding to the case data according to the feature information; and replacing the case data of the target test case with the current replacement data. By adopting this method, test case repair time can be reduced and repair efficiency improved.

Description

Test case repairing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of software testing technologies, and in particular, to a method and an apparatus for repairing a test case, a computer device, and a storage medium.
Background
With the development of software testing technology, test cases are generally written to verify specific targets in the testing process. However, test cases often encounter exceptions during execution, causing tests to fail.
Because there are many possible causes of a test case execution failure, such as a change in the case logic, a problem with the case data, or a change in the test object, the repair method also differs by failure cause. In the conventional technology, a test case that fails to execute is generally just fed back to a tester through a simple feedback mechanism, and the tester must locate and correct the error manually. This places high demands on the tester's error-checking and correcting abilities, greatly increases the test case repair time, and reduces the repair efficiency.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a test case repairing method and apparatus, a computer device, and a storage medium capable of improving the efficiency of repairing test cases.
A method for repairing a test case comprises the following steps:
acquiring a target test case, wherein the target test case is a test case with execution failure times meeting set conditions;
analyzing the test failure reason of the target test case;
if the analyzed test failure reason is that the case data is abnormal, acquiring the case data of the target test case, and extracting characteristic information from the case data;
selecting current replacement data from a standby test data set corresponding to the case data according to the characteristic information;
and replacing the case data of the target test case by using the current replacement data.
In one embodiment, extracting feature information from use case data includes:
and extracting request characteristic information from the interface request data of the use case data.
In one embodiment, extracting feature information from use case data includes:
and extracting response characteristic information from interface response data of the use case data.
In one embodiment, extracting feature information from use case data includes:
and extracting attribute feature information of the case data from the data attribute dimension.
In one embodiment, selecting current replacement data from a standby test data set corresponding to case data according to the characteristic information includes:
converting the characteristic information into a characteristic vector, and calculating the similarity between the standby test data in the standby test data set and the case data according to the characteristic vector;
and selecting the current replacement data from the standby test data with the similarity larger than a preset threshold value.
In one embodiment, before replacing case data of the target test case with the current replacement data, the method further includes:
using the current replacement data as case data of a target test case, generating a case to be verified, and verifying whether the case to be verified can be successfully executed;
and if so, executing the step of replacing the case data of the target test case by using the current replacing data.
In one embodiment, the method further comprises:
if not, re-selecting the replacement data from the standby test data set corresponding to the case data according to the characteristic information, taking the re-selected replacement data as the current replacement data, and generating the case to be verified by taking the current replacement data as the case data of the target test case.
In one embodiment, the method further comprises:
if not, generating first prompt information representing that the test failure is caused by abnormal case data.
In one embodiment, analyzing the test failure reason of the target test case includes:
acquiring a plurality of execution data of a target test case; wherein the execution data comprises use case data;
respectively calling corresponding scoring models according to the execution data, and respectively carrying out data validity scoring on the execution data by utilizing the scoring models;
and analyzing the test failure reason of the target test case according to the grading result of each grading model.
In one embodiment, analyzing the test failure reason of the target test case according to the scoring result of each scoring model includes:
determining the weight of the target test case execution failure caused by each execution data according to the grade of each grading model;
and determining the test failure reason of the target test case according to the execution data with the largest weight.
In one embodiment, the method further comprises:
and if the analyzed test failure reason is not the case data abnormity, generating second prompt information representing that manual repair is to be performed.
In one embodiment, the second prompt includes a test failure reason.
In one embodiment, the method further comprises: and if the analyzed test failure reason is not the case data abnormity, calling other abnormity repairing models which are not specific to the case data abnormity to repair the target test case.
A device for repairing a test case, the device comprising:
the target test case acquisition module is used for acquiring a target test case, wherein the target test case is a test case with execution failure times meeting set conditions;
the failure reason analysis module is used for analyzing the test failure reason of the target test case;
the characteristic information extraction module is used for acquiring the case data of the target test case and extracting the characteristic information from the case data if the case data is analyzed to be abnormal due to the test failure;
the replacement data selection module is used for selecting current replacement data from the standby test data set corresponding to the case data according to the characteristic information;
and the case data replacing module is used for replacing the case data of the target test case by using the current replacing data.
A computer device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the above test case repairing method when executing the computer program.
A computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the above test case repairing method.
According to the test case repairing method and device, the computer device, and the storage medium described above, the test failure cause of the target test case is analyzed. When the execution failure is found to be caused by abnormal case data, feature information is extracted from the case data of the target test case, replacement data is screened from the standby test data set according to the feature information, and the abnormal case data is replaced with the replacement data, thereby repairing the target test case efficiently and automatically.
Drawings
FIG. 1 is a diagram of an application environment of a method for repairing test cases in one embodiment;
FIG. 2 is a flowchart illustrating a method for repairing test cases according to an embodiment;
FIG. 3 is a flowchart illustrating the steps of analyzing the cause of the test failure of the target test case in one embodiment;
FIG. 4 is a technical architecture diagram for analyzing the cause of test failure of test cases in an application embodiment;
FIG. 5 is a technical architecture diagram for repairing case data of a test case in an application embodiment;
FIG. 6 is a block diagram showing a structure of a device for repairing a test case according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for repairing the test case can be applied to the application environment shown in fig. 1. The server 102 pulls a target test case from an execution result message queue of the automated test platform 104, calls a test failure reason analysis model to analyze a test failure reason of the target test case, acquires case data of the target test case if the analyzed test failure reason is abnormal case data, extracts feature information from the case data, selects current replacement data from a standby test data set corresponding to the case data according to the feature information, and replaces the case data of the target test case with the current replacement data. The server 102 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In an embodiment, as shown in fig. 2, a method for repairing a test case is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step S10: and acquiring a target test case, wherein the target test case is a test case with execution failure times meeting set conditions.
The target test case is a test case with execution failure times meeting set conditions, and the set conditions can be set according to test requirements.
Specifically, the interface automation test platform marks each test case that fails to execute. The server pulls failed test cases from the case execution result message queue of the platform, determines from the failure marks whether a case's number of execution failures satisfies the set condition, and if so, takes that test case as a target test case.
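As an illustration only, the following minimal Python sketch shows how such a monitor might filter the result queue. The message fields and the concrete failure threshold are assumptions for the example, not details fixed by this application.

    # Minimal sketch of step S10: filter a result queue for target test cases.
    # ResultMessage and FAILURE_THRESHOLD are illustrative assumptions.
    from dataclasses import dataclass

    FAILURE_THRESHOLD = 2  # assumed "set condition": at least two execution failures

    @dataclass
    class ResultMessage:
        case_id: str
        failed: bool
        failure_count: int  # maintained via the platform's failure marking

    def pull_target_cases(result_queue):
        """Yield the IDs of test cases whose failure count meets the set condition."""
        for msg in result_queue:
            if msg.failed and msg.failure_count >= FAILURE_THRESHOLD:
                yield msg.case_id

    # Usage: any iterable of ResultMessage can stand in for the message queue.
    queue = [ResultMessage("case-001", True, 1), ResultMessage("case-002", True, 3)]
    print(list(pull_target_cases(queue)))  # ['case-002']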
Step S20: and analyzing the test failure reason of the target test case.
The test failure cause is the reason the target test case failed to execute. Specifically, it can be analyzed from the data carried in the target test case or from intermediate data generated during execution. For example, the failure of the target test case may be analyzed and evaluated from at least one of the following dimensions: interface request data, interface response data, execution result data, execution environment data, system connectivity data, middleware state data, execution-end service state data, or response-end service state data.
Step S30: and if the analyzed test failure reason is that the case data is abnormal, acquiring the case data of the target test case, and extracting characteristic information from the case data.
The case data is the test data used for the interface test in the test case, and may include interface request data and expected interface response data. Case data contains a large amount of test information, and feature information can be extracted from at least one dimension of it according to preset conditions. Neither the extraction method nor the number of extracted features is limited, as long as the extracted feature information reflects the characteristics of the case data well enough to serve the preset target.
Step S40: and selecting current replacement data from the standby test data set corresponding to the case data according to the characteristic information.
The standby test data set comprises a plurality of standby test data which can be used for interface test, the source of the standby test data is not limited, the standby test data can be test data manually input by a user, and the standby test data can also be test data automatically recorded by a system in the interface history test process. The current replacement data is standby test data which is selected by the server from the standby test data set and is used for replacing case data in the target test case.
Specifically, the server may extract feature information of each piece of standby test data in the standby test data set according to an extraction manner and dimensions of case data feature information of the target test case, and compare and analyze the feature information of the case data and the feature information of each piece of standby test data, thereby screening the standby test data with similar feature information as the current replacement data.
Step S50: and replacing the case data of the target test case by using the current replacement data.
Specifically, case data in the target test case is replaced by current replacement data selected from the standby test data, so that abnormal case data of the target test case with failed execution is repaired.
According to this test case repairing method, the test failure cause of the target test case is analyzed. When the execution failure is found to be caused by abnormal case data, feature information is extracted from the case data of the target test case, replacement data is screened from the standby test data set according to the feature information, and the abnormal case data is replaced with the replacement data, thereby repairing the target test case efficiently and automatically.
In one embodiment, extracting feature information from use case data includes: and extracting request characteristic information from the interface request data of the use case data.
In this embodiment, request feature information can be extracted from the case data along the request dimension of the interface, for example from at least one of the following aspects: 1. the request header; 2. the request method (for HTTP protocol data); 3. request body field types; 4. request body field content; 5. request body field weights. The interface request data is an important component of the case data; extracting features along the request dimension yields information that fully reflects the request characteristics of the case, which improves the accuracy of subsequent replacement data selection and therefore the accuracy of repairing abnormal case data of the test case.
In one embodiment, extracting feature information from use case data includes: and extracting response characteristic information from interface response data of the use case data.
In this embodiment, response feature information can be extracted from the case data along the response dimension of the interface, for example from at least one of the following aspects: 1. the response header; 2. the response encoding (for HTTP protocol data); 3. response body field types; 4. response body field content; 5. response body field weights. The interface response data is an important component of the case data; extracting features along the response dimension yields information that fully reflects the response characteristics of the case, which improves the accuracy of subsequent replacement data selection and therefore the accuracy of repairing abnormal case data of the test case.
In one embodiment, extracting feature information from use case data includes: and extracting attribute feature information of the case data from the data attribute dimension.
In this embodiment, attribute feature information can be extracted from the data attribute dimension of the case data, for example from at least one of the following aspects: 1. the data date; 2. the data source; 3. the data usage frequency; 4. the data success rate. Data attributes reflect the characteristics of the data, and selecting replacement data from standby test data with similar attributes improves the accuracy of repairing abnormal case data of the test case. A combined sketch of these three feature dimensions follows.
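The sketch below renders the three extraction dimensions described above in Python, purely for illustration; the input layout (request/response/attributes keys) and the flat key-value encoding are assumptions, not the application's concrete data format.

    # Illustrative extraction of request, response, and attribute features
    # from one piece of case data; all field names are assumed.
    def extract_features(case_data: dict) -> dict:
        request = case_data.get("request", {})
        response = case_data.get("response", {})
        attrs = case_data.get("attributes", {})
        return {
            # request dimension (header, method, body field types/content/weights)
            "req.headers": request.get("headers"),
            "req.method": request.get("method"),
            "req.body_fields": sorted(request.get("body", {})),
            # response dimension (header, encoding, body field types/content/weights)
            "resp.headers": response.get("headers"),
            "resp.encoding": response.get("encoding"),
            "resp.body_fields": sorted(response.get("body", {})),
            # data attribute dimension (date, source, usage frequency, success rate)
            "attr.date": attrs.get("date"),
            "attr.source": attrs.get("source"),
            "attr.usage_frequency": attrs.get("usage_frequency"),
            "attr.success_rate": attrs.get("success_rate"),
        }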
In one embodiment, selecting current replacement data from a standby test data set corresponding to case data according to the characteristic information includes: converting the characteristic information into a characteristic vector, and calculating the similarity between the standby test data in the standby test data set and the case data according to the characteristic vector; and selecting the current replacement data from the standby test data with the similarity larger than a preset threshold value.
In this embodiment, the extracted feature information of the target test case can be converted into a feature vector, and the similarity between the case data and each piece of standby test data in the standby test data set can be calculated from that feature vector and the feature vector of each piece of standby test data. Computing the similarity quantifies the relationship between the case data and the standby test data, making the relationship clearer and allowing replacement data to be selected from the standby test data more accurately. The similarity calculation method is not limited; it may, for example, be the Pearson correlation coefficient method.
Further, according to the calculated similarities, the standby test data whose similarity is greater than the preset threshold can be taken as candidate replacement data, and candidates can then be tried as the current replacement data in descending order of similarity.
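A minimal sketch of this selection step follows, using the Pearson correlation coefficient named above as the similarity measure; the hashing-based vectorization and the 0.8 threshold are illustrative assumptions.

    # Vectorize feature dicts, score standby records with Pearson correlation,
    # and return the candidates above the threshold, most similar first.
    import math

    def to_vector(features: dict, dim: int = 64) -> list:
        vec = [0.0] * dim
        for key, value in features.items():
            vec[hash((key, str(value))) % dim] += 1.0  # assumed hashing encoding
        return vec

    def pearson(a: list, b: list) -> float:
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb) if sa and sb else 0.0

    def candidate_replacements(case_features, standby_set, extract, threshold=0.8):
        """Return standby test data above the threshold, in descending similarity."""
        case_vec = to_vector(case_features)
        scored = [(pearson(case_vec, to_vector(extract(d))), d) for d in standby_set]
        return [d for s, d in sorted(scored, key=lambda p: -p[0]) if s > threshold]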
In one embodiment, before replacing case data of the target test case with the current replacement data, the method further includes: using the current replacement data as case data of a target test case, generating a case to be verified, and verifying whether the case to be verified can be successfully executed; and if so, executing the step of replacing the case data of the target test case by using the current replacing data.
In this embodiment, before replacing the case data of the target test case with the current replacement data selected from the standby test data, the validity of the current replacement data may be verified, that is, whether the current replacement data can restore the target test case to normal execution. Specifically, the current replacement data is used as the case data of the target test case to generate a case to be verified, and the case to be verified is executed. If it executes successfully, the target test case will run normally once its case data is replaced with the current replacement data, so the step of replacing the case data with the current replacement data can be performed. Adding this validity verification step further improves the accuracy of test case repair.
In one embodiment, the method further comprises: if not, re-selecting the replacement data from the standby test data set corresponding to the case data according to the characteristic information, taking the re-selected replacement data as the current replacement data, and generating the case to be verified by taking the current replacement data as the case data of the target test case.
In this embodiment, if the validity verification of the current replacement data fails, the target test case will not return to normal operation after its case data is replaced with the current replacement data. Therefore, new replacement data must be selected from the standby test data, and the newly selected data is again taken as the current replacement data and verified, until replacement data meeting the conditions is found in the standby test data set.
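The verify-then-retry loop of the two embodiments above might look like the following sketch, where execute_case() is a hypothetical stand-in for the automated platform's execution API.

    # Try candidate replacement data in order; keep the first that lets the
    # case execute successfully. execute_case() is a hypothetical callable.
    def repair_case(target_case: dict, candidates: list, execute_case) -> bool:
        for replacement in candidates:
            trial = dict(target_case, case_data=replacement)  # case to be verified
            if execute_case(trial):                # validity verification
                target_case["case_data"] = replacement
                return True
        return False  # no candidate passed; fall through to the first prompt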
In an embodiment, if every generated case to be verified fails to execute, first prompt information indicating that manual repair is required may be generated, where the first prompt information includes information indicating that the test failure cause is a case data exception. The first prompt information may further be sent to the terminal of the tester responsible for the target test case, who can then repair the test case manually according to it. Because the first prompt information states that the failure cause is abnormal case data, the tester can repair the test case directly without additional error-checking work, avoiding the loss of repair efficiency caused by time-consuming error checking.
Preferably, before generating the first prompt information indicating that the target test case is to be repaired manually, it may further be determined whether none of the standby test data with similarity greater than the preset threshold can serve as replacement data for the case data of the target test case; if so, the first prompt information is generated.
In one embodiment, as shown in fig. 3, analyzing the test failure cause of the target test case includes: Step S202: acquiring a plurality of execution data of the target test case, wherein the execution data comprises the case data; Step S204: calling corresponding scoring models according to the respective execution data, and scoring the data validity of each piece of execution data with its scoring model; Step S206: analyzing the test failure cause of the target test case according to the scoring results of the scoring models.
In this embodiment, the effectiveness of the data may be scored using a scoring model that performs an analytical assessment for each type of execution data. The failure reason of the target test case can be more comprehensively known according to the grading result of each grading model, so that the repairing strategy can be more accurately selected, and the efficiency of failure repairing is improved.
Specifically, the execution data may include: case data, execution environment data, system logic data, middleware state data, execution-end service state data, and response-end service state data. The corresponding scoring models may include: a case data factor scoring model, an environment factor scoring model, a logic factor scoring model, a middleware factor scoring model, an execution-end factor scoring model, and a response-end factor scoring model.
The case data factor scoring model analyzes and scores the validity of the case data from at least one of the following aspects: 1. the date the case data was last changed; 2. the source of the case data change (automatic system update or tester submission); 3. the execution success rate of the case after the last change of its data; 4. the execution success rate of the case before the last change of its data; 5. the fields that differ before and after the last change of the case data.
The environment factor scoring model analyzes and scores the validity of the execution environment data from at least one of the following aspects: 1. the test environment type; 2. the availability of the test environment; 3. connectivity between the systems involved in the interface.
The logic factor scoring model analyzes and scores the validity of the system logic data from at least one of the following aspects: 1. the date the case logic was last changed; 2. the current compilation state of the case code; 3. the execution success rate of the case logic after the last change; 4. the execution success rate of the case logic before the last change; 5. the person making the current logic change; 6. the person who made the last logic change; 7. changes to the packages the case logic code depends on.
The execution-end factor scoring model analyzes and scores the validity of the execution-end service state data from at least one of the following aspects: 1. the server state of the execution end; 2. the network state of the execution end; 3. the running state of the test case at the execution end; 4. the test framework version with which the execution end runs the test case; 5. the versions of the dependency packages with which the execution end runs the test case.
The response-end factor scoring model analyzes and scores the validity of the response-end service state data from at least one of the following aspects: 1. the number of instances of the response-end service; 2. the availability of each instance of the response-end service; 3. the response delay of the response-end service as a whole; 4. the success rate of each instance of the response-end service; 5. the date of the most recent successful response of the same interface at the response end; 6. the response success rate of the same interface at the response end; 7. the most recent deployment date of the response-end service; 8. the date the interface source code at the response end was last changed; 9. the date of the most recent successful response to the same request parameters after the last change of the response-end interface source code.
In one embodiment, analyzing the test failure reason of the target test case according to the scoring result of each scoring model includes: determining the weight of the target test case execution failure caused by each execution data according to the grade of each grading model; and determining the test failure reason of the target test case according to the execution data with the largest weight.
In this embodiment, after multiple scoring tasks are invoked and each piece of execution data is scored for validity by its corresponding scoring model, a comprehensive scoring result is obtained. The lower the validity score of a piece of execution data, the higher its weight as the cause of the case execution failure, and the cause associated with the execution data with the largest weight is determined to be the test failure cause. Combining the scores and computing the weights identifies the failure cause more accurately, which improves the accuracy of subsequent automatic repair.
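As a toy illustration of this weighting scheme (assuming, only for the example, validity scores on a 0-100 scale where lower means less valid):

    # Each scoring model rates the validity of one kind of execution data;
    # lower validity maps to higher blame weight, and the dimension with the
    # largest weight names the failure cause. Scores and models are toys.
    def analyze_failure_cause(execution_data: dict, scoring_models: dict) -> str:
        weights = {}
        for kind, data in execution_data.items():
            validity = scoring_models[kind](data)   # assumed 0 (invalid) .. 100 (valid)
            weights[kind] = 100 - validity          # lower validity -> higher weight
        return max(weights, key=weights.get)        # dimension most likely at fault

    models = {"case_data": lambda d: 20, "environment": lambda d: 95}
    print(analyze_failure_cause({"case_data": {}, "environment": {}}, models))  # case_data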
In one embodiment, the method further comprises: if the analyzed test failure cause is not a case data exception, generating second prompt information indicating that manual repair is required, or calling another exception repair model, not specific to case data exceptions, to repair the target test case.
In this embodiment, if the analyzed test failure cause is not a case data exception, second prompt information indicating that manual repair is required may be generated, so that the tester is reminded in time to perform manual repair. Another repair model may also be called; for example, if the analyzed failure cause is an abnormality of the middleware state data, the repair model for middleware state data may be called to repair it automatically.
In one embodiment, the second prompt information may include the test failure cause. The tester can then repair the case specifically according to the failure cause contained in the second prompt information, without performing additional error-checking work, avoiding the loss of repair efficiency caused by time-consuming error checking.
The following further describes the method for repairing the test case according to the present application with reference to an application example.
As shown in fig. 4, fig. 4 is a technical architecture diagram for analyzing the cause of test failure of a test case in an application example, and the technical architecture diagram is described as follows:
(1) A case result monitor is set to listen to the test result message queue of the automated test platform and, according to the failure marks of the monitored test cases, determine their number of execution failures.
(2) If a test case has failed to execute for the first time, its failure mark is updated and a request to re-execute the test case is sent to the automated test platform.
(3) If the test case has not failed for the first time only, it is determined to be a target test case.
(4) Execution data of the target test case is acquired, and several analysis tasks are invoked according to the execution data. The analysis tasks may include: an environment analysis task, a middleware analysis task, an execution-end analysis task, a response-end analysis task, a case logic analysis task, and a case data analysis task.
(5) Each analysis task scores its corresponding execution data with a different scoring model, and the scoring results of the scoring models are analyzed together to determine the test failure cause of the target test case, which is stored in the database.
(6) An analysis result monitoring task that listens for test failure causes retrieves the analysis result from the database and, depending on whether the cause is a case data exception, invokes either the case data repair task (for case data exceptions) or another repair task (for other exceptions) to automatically repair the target test case, as condensed in the sketch below.
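The condensed sketch below strings steps (1) through (6) together; every callable passed in (retry, analyze, and the two repair functions) is a hypothetical stand-in for the tasks named above.

    # Dispatch logic of the FIG. 4 architecture, condensed for illustration.
    def on_result_message(msg, retry, analyze, repair_case_data, repair_other):
        if not msg.failed:
            return
        if msg.failure_count == 1:       # step (2): first failure, just re-execute
            retry(msg.case_id)
            return
        cause = analyze(msg.case_id)     # steps (4)-(5): score execution data
        if cause == "case_data":         # step (6): dispatch by failure cause
            repair_case_data(msg.case_id)
        else:
            repair_other(msg.case_id, cause)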
As shown in fig. 5, fig. 5 is a technical architecture diagram for repairing use case data of a test case in an application example, and the technical architecture diagram is described as follows:
(1) The server fetches the case data of the target test case from the database and repairs it.
(2) Request features, response features, and data features are extracted from the case data to generate the feature information set of the case data.
(3) The server extracts the request features, response features, and data features of each piece of standby test data in the standby test data set to form the feature information set of the standby test data.
(4) The case data and the standby test data are compared on the basis of the two feature information sets, and through similarity calculation the test data most similar to the case data is selected from the standby test data set as the current replacement data.
(5) Whether the current replacement data is valid is verified. Specifically, the selected current replacement data is used as the case data of the target test case to generate a case to be verified, the case to be verified is executed, and it is judged whether the execution succeeds.
(6) If the case to be verified executes successfully, the case data of the target test case is replaced with the current replacement data, and the tester responsible for the case may be notified that the replacement succeeded.
(7) If the case to be verified does not execute successfully, it is judged whether the standby test data set still contains test data with similarity greater than the preset threshold; if so, new test data is picked from those candidates as the current replacement data and the verification step continues.
(8) If none of the test data with similarity greater than the preset threshold passes verification, the tester may be notified to repair the case manually. A sketch tying these steps together follows.
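For illustration, the sketch below composes the candidate_replacements() and repair_case() helpers from the earlier sketches with a hypothetical notify() callback; none of these names comes from the application itself.

    # Driver for the FIG. 5 flow, assuming the helpers sketched earlier.
    def repair_with_notification(target_case, standby_set, extract, execute_case, notify):
        features = extract(target_case["case_data"])                          # step (2)
        candidates = candidate_replacements(features, standby_set, extract)   # steps (3)-(4)
        if repair_case(target_case, candidates, execute_case):                # steps (5)-(7)
            notify(target_case["id"], "replacement succeeded")                # step (6)
        else:
            notify(target_case["id"], "manual repair required: case data exception")  # step (8)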
It should be understood that although the steps in the flowcharts of fig. 1-3 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1-3 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of their execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a device for repairing a test case, including: the system comprises a target case obtaining module 10, a failure reason analyzing module 20, a characteristic information extracting module 30, a replacement data selecting module 40 and a case data replacing module 50, wherein:
the target case obtaining module 10 is configured to obtain a target test case, where the target test case is a test case whose execution failure times satisfy a set condition.
And the failure reason analysis module 20 is configured to analyze the test failure reason of the target test case.
The characteristic information extraction module 30 is configured to, if it is analyzed that the test failure reason is that the case data is abnormal, obtain the case data of the target test case, and extract characteristic information from the case data.
And the replacement data selecting module 40 is configured to select current replacement data from the standby test data set corresponding to the case data according to the feature information.
And a use case data replacing module 50, configured to replace use case data of the target test case with the current replacement data.
In one embodiment, the feature information extraction module 30 extracts the request feature information from the interface request data of the use case data.
In one embodiment, feature information extraction module 30 extracts response feature information from interface response data of the use case data.
In one embodiment, feature information extraction module 30 extracts attribute feature information of use case data from the data attribute dimension.
In one embodiment, the replacement data selecting module 40 converts the feature information into a feature vector, and calculates the similarity between the standby test data in the standby test data set and the case data according to the feature vector; and selecting the current replacement data from the standby test data with the similarity larger than a preset threshold value.
In one embodiment, the replacement data selecting module 40 is further configured to use the current replacement data as case data of the target test case, generate a case to be verified, and verify whether the case to be verified can be successfully executed; and if so, executing the step of replacing the case data of the target test case by using the current replacing data.
In an embodiment, the replacement data selecting module 40 is further configured to, if not, reselect replacement data from the standby test data set corresponding to the case data according to the feature information, take the reselected replacement data as the current replacement data, and perform the step of generating a case to be verified with the current replacement data as the case data of the target test case.
In one embodiment, the replacement data selecting module 40 is further configured to, if not, generate first prompt information indicating that manual repair is required.
In one embodiment, the failure cause analysis module 20 obtains a plurality of execution data of the target test case; wherein the execution data comprises use case data; respectively calling corresponding scoring models according to the execution data, and respectively carrying out data validity scoring on the execution data by utilizing the scoring models; and analyzing the test failure reason of the target test case according to the grading result of each grading model.
In one embodiment, the failure cause analysis module 20 determines the weight of the target test case execution failure caused by each execution data according to the level of each score model score; and determining the test failure reason of the target test case according to the execution data with the largest weight.
In an embodiment, the failure cause analysis module 20 is further configured to generate second prompt information indicating that manual repair is required if the analyzed test failure cause is not a case data exception.
In one embodiment, the failure cause analysis module 20 is further configured to invoke other exception repair models that are not directed to the case data exception to repair the target test case.
For specific limitations of the device for repairing test cases, reference may be made to the above limitations of the method for repairing test cases, which are not described herein again. All or part of each module in the repair device of the test case can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the standby test data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the test case repairing method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring a target test case, wherein the target test case is a test case with execution failure times meeting set conditions; analyzing the test failure reason of the target test case; if the analyzed test failure reason is that the case data is abnormal, acquiring the case data of the target test case, and extracting characteristic information from the case data; selecting current replacement data from a standby test data set corresponding to the case data according to the characteristic information; and replacing the case data of the target test case by using the current replacement data.
In one embodiment, when the processor executes the computer program to extract the feature information from the use case data, the following steps are specifically implemented: and extracting request characteristic information from the interface request data of the use case data.
In one embodiment, when the processor executes the computer program to extract the feature information from the use case data, the following steps are specifically implemented: and extracting response characteristic information from interface response data of the use case data.
In one embodiment, when the processor executes the computer program to extract the feature information from the use case data, the following steps are specifically implemented: and extracting attribute feature information of the case data from the data attribute dimension.
In one embodiment, when the processor executes the computer program to select the current replacement data from the standby test data set corresponding to the case data according to the characteristic information, the following steps are specifically implemented: converting the characteristic information into a characteristic vector, and calculating the similarity between the standby test data in the standby test data set and the case data according to the characteristic vector; and selecting the current replacement data from the standby test data with the similarity larger than a preset threshold value.
In one embodiment, before the processor executes the computer program to replace case data of the target test case with the current replacement data, the following steps are also implemented: using the current replacement data as case data of a target test case, generating a case to be verified, and verifying whether the case to be verified can be successfully executed; and if so, executing the step of replacing the case data of the target test case by using the current replacing data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if not, re-selecting the replacement data from the standby test data set corresponding to the case data according to the characteristic information, taking the re-selected replacement data as the current replacement data, and entering a step of generating a case to be verified by taking the current replacement data as the case data of the target test case.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if not, generating first prompt information indicating that manual repair is required.
In one embodiment, when the processor executes the computer program to analyze the test failure reason of the target test case, the following steps are specifically implemented: acquiring a plurality of execution data of a target test case; wherein the execution data comprises use case data; respectively calling corresponding scoring models according to the execution data, and respectively carrying out data validity scoring on the execution data by utilizing the scoring models; and analyzing the test failure reason of the target test case according to the grading result of each grading model.
In one embodiment, when the processor executes the computer program to analyze the test failure reason of the target test case according to the scoring result of each scoring model, the following steps are specifically implemented: determining the weight of the target test case execution failure caused by each execution data according to the grade of each grading model; and determining the test failure reason of the target test case according to the execution data with the largest weight.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if the analyzed test failure cause is not a case data exception, generating second prompt information indicating that manual repair is required. In one embodiment, the processor, when executing the computer program, further performs the steps of: if the analyzed test failure cause is not a case data exception, calling another exception repair model, not specific to case data exceptions, to repair the target test case.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a target test case, wherein the target test case is a test case with execution failure times meeting set conditions; analyzing the test failure reason of the target test case; if the analyzed test failure reason is that the case data is abnormal, acquiring the case data of the target test case, and extracting characteristic information from the case data; selecting current replacement data from a standby test data set corresponding to the case data according to the characteristic information; and replacing the case data of the target test case by using the current replacement data.
In one embodiment, when the computer program is executed by a processor to extract feature information from use case data, the following steps are specifically implemented: and extracting request characteristic information from the interface request data of the use case data.
In one embodiment, when the computer program is executed by a processor to extract feature information from use case data, the following steps are specifically implemented: and extracting response characteristic information from interface response data of the use case data.
In one embodiment, when the computer program is executed by a processor to extract feature information from use case data, the following steps are specifically implemented: and extracting attribute feature information of the case data from the data attribute dimension.
In one embodiment, when the computer program is executed by the processor to realize selection of current replacement data from a standby test data set corresponding to case data according to the characteristic information, the following steps are specifically realized: converting the characteristic information into a characteristic vector, and calculating the similarity between the standby test data in the standby test data set and the case data according to the characteristic vector; and selecting the current replacement data from the standby test data with the similarity larger than a preset threshold value.
In one embodiment, before the computer program is executed by the processor to replace the case data of the target test case with the current replacement data, the following steps are also implemented: using the current replacement data as case data of a target test case, generating a case to be verified, and verifying whether the case to be verified can be successfully executed; and if so, executing the step of replacing the case data of the target test case by using the current replacing data.
In one embodiment, the computer program when executed by the processor further performs the steps of: if not, re-selecting the replacement data from the standby test data set corresponding to the case data according to the characteristic information, taking the re-selected replacement data as the current replacement data, and generating the case to be verified by taking the current replacement data as the case data of the target test case.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if not, generating first prompt information indicating that manual repair is required.
In one embodiment, when the computer program is executed by the processor to analyze the test failure reason of the target test case, the following steps are specifically implemented: acquiring a plurality of execution data of a target test case; wherein the execution data comprises use case data; respectively calling corresponding scoring models according to the execution data, and respectively carrying out data validity scoring on the execution data by utilizing the scoring models; and analyzing the test failure reason of the target test case according to the grading result of each grading model.
In one embodiment, when the computer program is executed by the processor to analyze the test failure reason of the target test case according to the scoring result of each scoring model, the following steps are specifically implemented: determining the weight of the target test case execution failure caused by each execution data according to the grade of each grading model; and determining the test failure reason of the target test case according to the execution data with the largest weight.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if the analyzed test failure cause is not a case data exception, generating second prompt information indicating that manual repair is required.
In one embodiment, the computer program when executed by the processor further performs the steps of: and if the analyzed test failure reason is not the case data abnormity, calling other abnormity repairing models which are not specific to the case data abnormity to repair the target test case.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for repairing a test case, the method comprising:
acquiring a target test case, wherein the target test case is a test case with execution failure times meeting set conditions;
analyzing the test failure reason of the target test case;
if the test failure reason is analyzed to be case data abnormity, case data of the target test case is obtained, and characteristic information is extracted from the case data;
selecting current replacement data from a standby test data set corresponding to the case data according to the characteristic information;
and replacing the case data of the target test case by using the current replacement data.
2. The method of claim 1, wherein the extracting characteristic information from the case data comprises:
extracting request characteristic information from interface request data of the case data; and/or
extracting response characteristic information from interface response data of the case data; and/or
extracting attribute characteristic information of the case data in a data attribute dimension.
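Again purely illustratively, the three feature dimensions of claim 2 could be collected as follows. The concrete field names ("request", "response", "created_at", and the rest) are invented for the sketch; the claim fixes only the dimensions, not a schema.

```python
def extract_characteristic_info(case_data: dict) -> dict:
    """Gather request, response, and attribute features from case data."""
    request = case_data.get("request", {})
    response = case_data.get("response", {})
    return {
        # request characteristic information (interface request data)
        "request_method": request.get("method"),
        "request_keys": sorted(request.get("params", {})),
        # response characteristic information (interface response data)
        "response_status": response.get("status"),
        "response_keys": sorted(response.get("body", {})),
        # attribute characteristic information (data attribute dimension)
        "data_type": case_data.get("data_type"),
        "created_at": case_data.get("created_at"),
    }
```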
3. The method according to claim 1, wherein the selecting current replacement data from the standby test data set corresponding to the case data according to the characteristic information comprises:
converting the characteristic information into a characteristic vector, and calculating the similarity between the standby test data in the standby test data set and the case data according to the characteristic vector;
and selecting the current replacement data from the standby test data whose similarity is greater than a preset threshold.
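A minimal sketch of claim 3's selection step, assuming a simple binary feature vector and cosine similarity; the vocabulary of "key=value" terms, the 0.8 default threshold, and all function names are illustrative choices, not part of the claim. Standby data items are assumed to already be feature dictionaries of the same shape as the case features.

```python
import math


def to_vector(features: dict, vocabulary: list) -> list:
    """Binary feature vector over a fixed vocabulary of 'key=value' terms."""
    tokens = {f"{k}={v}" for k, v in features.items()}
    return [1.0 if term in tokens else 0.0 for term in vocabulary]


def cosine(a: list, b: list) -> float:
    """Cosine similarity of two equal-length vectors; 0.0 for zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def candidate_replacements(case_features: dict, standby_set: list,
                           vocabulary: list, threshold: float = 0.8) -> list:
    """Standby data whose similarity to the case data exceeds the threshold,
    most similar first; the caller takes the head as the current replacement."""
    target = to_vector(case_features, vocabulary)
    scored = [(cosine(target, to_vector(d, vocabulary)), d) for d in standby_set]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored if score > threshold]
```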
4. The method of claim 1, wherein, before the replacing the case data of the target test case by using the current replacement data, the method further comprises:
generating a case to be verified by using the current replacement data as the case data of the target test case, and verifying whether the case to be verified can be executed successfully;
and if so, executing the step of replacing the case data of the target test case by using the current replacement data.
5. The method of claim 4, further comprising:
if not, re-selecting replacement data from the standby test data set corresponding to the case data according to the characteristic information, taking the re-selected replacement data as the current replacement data, and returning to the step of generating a case to be verified by using the current replacement data as the case data of the target test case; or
and if not, generating first prompt information indicating that the test failure reason is a case data anomaly.
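Claims 4 and 5 together describe a verify-then-commit loop; below is a rough sketch under the same hypothetical data layout as the earlier sketches, where run_case is an assumed callable that executes a case and reports success.

```python
def replace_with_verification(target_case: dict, candidates: list, run_case):
    """Try candidate replacements until one verifies (claims 4 and 5)."""
    for replacement in candidates:  # ordered by similarity, per claim 3
        trial = dict(target_case, case_data=replacement)  # case to be verified
        if run_case(trial):  # verification succeeded
            target_case["case_data"] = replacement  # commit the replacement
            return target_case
    # No candidate verified: fall back to the first prompt information.
    return {"prompt": "test failure reason: case data anomaly; "
                      "no verified replacement found"}
```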
6. The method according to any one of claims 1 to 5, wherein the analyzing the reason for the test failure of the target test case comprises:
acquiring a plurality of pieces of execution data of the target test case, wherein the execution data comprises the case data;
calling a corresponding scoring model for each piece of execution data, and scoring the data validity of each piece of execution data with its scoring model;
analyzing the test failure reason of the target test case according to the grading result of each grading model;
preferably, the analyzing the test failure reason of the target test case according to the scoring result of each scoring model includes:
determining, according to the score given by each scoring model, the weight with which each piece of execution data caused the execution failure of the target test case;
and determining the test failure reason of the target test case according to the piece of execution data with the largest weight.
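To illustrate how claim 6's per-data scoring might be wired up: a registry maps each kind of execution data to its own scoring model, every piece of data is scored for validity, and the item with the largest failure weight is reported. The registry contents below are placeholder stubs, and the sketch assumes every kind of execution data has a registered model.

```python
# Hypothetical registry: one scoring model per kind of execution data,
# each mapping that data to a validity score in [0, 1].
SCORING_MODELS = {
    "case_data":   lambda d: 1.0 if d.get("valid") else 0.1,  # stub
    "environment": lambda d: 1.0,                             # stub
    "script":      lambda d: 0.9,                             # stub
}


def analyze_failure_reason(execution_data: dict) -> str:
    """Score each piece of execution data with its own model, then report
    the item with the largest failure weight (1 - validity score)."""
    scores = {name: SCORING_MODELS[name](data)
              for name, data in execution_data.items()}
    return max(scores, key=lambda name: 1.0 - scores[name])
```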
7. The method of claim 6, further comprising:
if the analyzed test failure reason is not a case data anomaly, generating second prompt information indicating that manual repair is needed, or calling another anomaly repair model, one not specific to case data anomalies, to repair the target test case; preferably, the second prompt information includes the test failure reason.
8. An apparatus for repairing a test case, the apparatus comprising:
the target test case acquisition module is used for acquiring a target test case, wherein the target test case is a test case whose number of execution failures meets a set condition;
the failure reason analysis module is used for analyzing the test failure reason of the target test case;
the characteristic information extraction module is used for acquiring case data of the target test case and extracting characteristic information from the case data if the analyzed test failure reason is a case data anomaly;
the replacement data selection module is used for selecting current replacement data from a standby test data set corresponding to the case data according to the characteristic information;
and the case data replacement module is used for replacing the case data of the target test case by using the current replacement data.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method of any one of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011231724.9A 2020-11-06 2020-11-06 Test case repairing method and device, computer equipment and storage medium Pending CN112363920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011231724.9A CN112363920A (en) 2020-11-06 2020-11-06 Test case repairing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112363920A true CN112363920A (en) 2021-02-12

Family

ID=74509437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011231724.9A Pending CN112363920A (en) 2020-11-06 2020-11-06 Test case repairing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112363920A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203473A (en) * 2017-05-26 2017-09-26 四川长虹电器股份有限公司 The automatization test system and method for automatic expansion interface test case
CN108427613A (en) * 2018-03-12 2018-08-21 平安普惠企业管理有限公司 Exceptional interface localization method, device, computer equipment and storage medium
CN111198813A (en) * 2018-11-20 2020-05-26 北京京东尚科信息技术有限公司 Interface testing method and device
CN110489320A (en) * 2019-07-05 2019-11-22 深圳壹账通智能科技有限公司 Restoring method, device, terminal device and the medium of test data
CN110990575A (en) * 2019-12-18 2020-04-10 斑马网络技术有限公司 Test case failure reason analysis method and device and electronic equipment
CN111400116A (en) * 2020-03-10 2020-07-10 珠海全志科技股份有限公司 Chip test verification method, computer device and computer readable storage medium
CN111427333A (en) * 2020-04-01 2020-07-17 北京四维智联科技有限公司 Test method and device for Internet of vehicles service platform and computer storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392000A (en) * 2021-06-10 2021-09-14 卫宁健康科技集团股份有限公司 Test case execution result analysis method, device, equipment and storage medium
CN113392000B (en) * 2021-06-10 2024-01-30 卫宁健康科技集团股份有限公司 Test case execution result analysis method, device, equipment and storage medium
CN113836041A (en) * 2021-11-17 2021-12-24 四川启睿克科技有限公司 Method for improving robustness of automatic test case
CN114578210A (en) * 2022-02-25 2022-06-03 苏州浪潮智能科技有限公司 Mainboard test method, device, equipment and storage medium
CN114578210B (en) * 2022-02-25 2024-02-02 苏州浪潮智能科技有限公司 Mainboard testing method, device, equipment and storage medium
CN115840715A (en) * 2023-02-27 2023-03-24 北京徐工汉云技术有限公司 Software test management method, device and storage medium
CN115840715B (en) * 2023-02-27 2023-05-05 北京徐工汉云技术有限公司 Software test management method, device and storage medium

Similar Documents

Publication Publication Date Title
CN112363920A (en) Test case repairing method and device, computer equipment and storage medium
CN110489314B (en) Model anomaly detection method and device, computer equipment and storage medium
CN110083514B (en) Software test defect evaluation method and device, computer equipment and storage medium
CN115511136B (en) Equipment fault auxiliary diagnosis method and system based on analytic hierarchy process and fault tree
CN113946499A (en) Micro-service link tracking and performance analysis method, system, equipment and application
CN112559364A (en) Test case generation method and device, computer equipment and storage medium
CN112148329A (en) Code version automatic updating method and device, computer equipment and storage medium
CN110990289B (en) Method and device for automatically submitting bug, electronic equipment and storage medium
CN115759357A (en) Power supply equipment safety prediction method, system, equipment and medium based on PSCADA data
CN115952081A (en) Software testing method, device, storage medium and equipment
CN114527974A (en) Method and device for realizing service function of software product and computer equipment
JP7190246B2 (en) Software failure prediction device
CN110908903B (en) Test method based on editable YAML file
CN116345690B (en) Power monitoring false alarm identification method and system based on power supply system equipment list
CN112395125A (en) Method and device for notifying page error report, computer equipment and storage medium
KR101834247B1 (en) Method and apparatus for analyzing safety of automotive software
CN114937043B (en) Equipment defect detection method, device, equipment and medium based on artificial intelligence
CN116431522A (en) Automatic test method and system for low-code object storage gateway
US11934302B2 (en) Machine learning method to rediscover failure scenario by comparing customer's server incident logs with internal test case logs
CN106708638B (en) System error detection method and device
CN115757138A (en) Method and device for determining script abnormal reason, storage medium and electronic equipment
CN110865939B (en) Application program quality monitoring method, device, computer equipment and storage medium
CN109783263B (en) Method and system for processing aging test fault of server
KR20210134466A (en) Method and System of Decision-Making for Establishing Maintenance Strategy of Power Generation Facilities
CN107102938B (en) Test script updating method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination