CN116303029A - Analysis method and device for test cases, processor and electronic equipment - Google Patents


Info

Publication number
CN116303029A
CN116303029A (application CN202310274461.7A)
Authority
CN
China
Prior art keywords
test case
failure
data
execution
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310274461.7A
Other languages
Chinese (zh)
Inventor
张闽珺
周朝信
高梦杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202310274461.7A priority Critical patent/CN116303029A/en
Publication of CN116303029A publication Critical patent/CN116303029A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3692Test management for test results analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing


Abstract

The application discloses a test case analysis method and device, a processor, and electronic equipment. The method, which relates to the field of artificial intelligence, comprises the following steps: acquiring time-series data, text data and attribute data of a failed test case, wherein the time-series data characterize the stability of the test case during execution, the text data characterize the specificity of the logs produced while the test case executes, and the attribute data characterize the uniqueness of the test case; inputting the time-series data, text data and attribute data into a target model to obtain the failure cause, wherein the target model is trained on multiple groups of training samples, each group comprising the time-series data, text data, attribute data and failure cause of one historical failed test case; and adjusting the execution method of the test case based on the failure cause. The method and device solve the problem in the related art of low efficiency when analyzing the causes of failed test cases.

Description

Analysis method and device for test cases, processor and electronic equipment
Technical Field
The application relates to the field of artificial intelligence, and in particular to a test case analysis method and device, a processor, and an electronic device.
Background
In the related art, the methods for analyzing the causes of failed test cases mainly fall into three categories. The first is failed-case attribution based on keyword/phrase rule matching: a tester filters log messages by configuring keywords or phrases, and if a configured keyword appears, the failed case is tagged with the corresponding cause label. The second is failed-case attribution based on error-stack classification: text analysis is performed on the console logs and error stacks associated with the failed test case in order to derive its cause. The third is failed-case attribution based on Bug reports. During the maintenance of large software programs, Bug reports have become an important medium for helping developers resolve bugs: a user reports a software error in a fixed format (i.e., a Bug report) and uploads it to the Bug tracking system, after which a senior developer is assigned to understand and fix the reported error based on the information in the submitted report.
However, the execution of automated test scripts is often affected by low environmental stability, invalid test-data states and the like, which makes the failure-cause investigation chain complex, forces similar failure causes to be analyzed repeatedly, and produces a wide variety of failure causes. Each of the above attribution methods has drawbacks: the first, based on manual rule matching, is costly and has no generalization capability; the second, based on error-stack classification, cannot account for problems arising across the whole automated-testing flow, especially stability problems during automated execution; the third, based on Bug-report analysis, consumes labor to generate and analyze the Bug reports, so its analysis efficiency is low when failed cases are numerous.
No effective solution has yet been proposed for the problem in the related art of low efficiency when analyzing the causes of failed test cases.
Disclosure of Invention
The main purpose of the application is to provide a test case analysis method and device, a processor and electronic equipment, so as to solve the problem in the related art of low efficiency when analyzing the causes of failed test cases.
To achieve the above object, according to one aspect of the present application, an analysis method for test cases is provided. The method comprises the following steps: acquiring time-series data, text data and attribute data of a failed test case, wherein the time-series data characterize the stability of the test case during execution, the text data characterize the specificity of the logs produced while the test case executes, and the attribute data characterize the uniqueness of the test case; inputting the time-series data, text data and attribute data into a target model to obtain the failure cause, wherein the target model is trained on multiple groups of training samples, each group comprising the time-series data, text data, attribute data and failure cause of one historical failed test case; and adjusting the execution method of the test case based on the failure cause.
Optionally, acquiring the time-series data of the failed test case includes: acquiring the execution time and execution state of each execution of the failed test case within a preset period, the execution state being either success or failure; determining a first execution time at which the failed test case was observed; determining a group of second execution times whose execution state is failure, and, for each second execution time, determining the adjacent execution whose state is success as a third execution time, thereby obtaining a plurality of third execution times; calculating the difference between each third execution time and the first execution time to obtain a group of differences; and determining target values of the group of differences as the time-series data, wherein the target values comprise at least one of: maximum, minimum, median and variance.
Optionally, obtaining the text data of the failed test case includes: obtaining a target log associated with the failed test case and the response-body information returned by the console running the failed test case, wherein the target log comprises at least one of: an integration-tool scheduling log, a console log and an error-stack log; extracting at least one keyword from the target log and inputting the keyword into a preset semantic-extraction model to obtain semantic features; and determining the keywords, semantic features and response-body information as the text data.
Optionally, obtaining the attribute data of the failed test case includes: determining the test-framework type, application type, test-progress stage, stub (mock service) state, communication-problem state, response-body return code, uniform-resource-locator features and interface type of the failed test case; and determining at least one of the test-framework type, application type, test-progress stage, stub state, communication-problem state, response-body return code, uniform-resource-locator features and interface type as the attribute data.
Optionally, before inputting the time-series data, text data and attribute data into the target model, the method further comprises: acquiring each historical failed test case and its corresponding historical failure cause, and extracting the time-series data, text data and attribute data of the historical failed test cases; taking the time-series data, text data, attribute data and historical failure cause of one historical failed test case as one group of training samples, thereby obtaining multiple groups of training samples; and inputting the multiple groups of training samples into a preset model for training to obtain the target model.
Optionally, before adjusting the execution method of the test case based on the failure cause, the method further includes: determining the actual failure cause of the failed test case through a preset rule; judging whether the predicted failure cause is the same as the actual failure cause; and, when they differ, adjusting the execution method of the test case based on the actual failure cause.
Optionally, when the failure cause differs from the actual failure cause, the method further includes: obtaining multiple groups of correction samples within a preset period, each group comprising a failed test case and its actual failure cause; correcting the target model based on the multiple groups of correction samples to obtain a corrected target model; and replacing the target model with the corrected target model.
To achieve the above object, according to another aspect of the present application, an analysis apparatus for test cases is provided. The apparatus comprises: an acquisition unit for acquiring the time-series data, text data and attribute data of a failed test case, wherein the time-series data characterize the stability of the test case during execution, the text data characterize the specificity of the logs produced while the test case executes, and the attribute data characterize the uniqueness of the test case; an input unit for inputting the time-series data, text data and attribute data into a target model to obtain the failure cause, wherein the target model is trained on multiple groups of training samples, each group comprising the time-series data, text data, attribute data and failure cause of one historical failed test case; and an adjusting unit for adjusting the execution method of the test case based on the failure cause.
Through the application, the following steps are adopted: acquiring the time-series data, text data and attribute data of a failed test case, wherein the time-series data characterize the stability of the test case during execution, the text data characterize the specificity of the logs produced while the test case executes, and the attribute data characterize the uniqueness of the test case; inputting the time-series data, text data and attribute data into a target model to obtain the failure cause, wherein the target model is trained on multiple groups of training samples, each group comprising the time-series data, text data, attribute data and failure cause of one historical failed test case; and adjusting the execution method of the test case based on the failure cause. This solves the problem in the related art of low efficiency when analyzing the causes of failed test cases: inputting the time-series data, text data and attribute data of the failed test case into the target model allows the failure cause to be analyzed from all three perspectives, so test case failures are analyzed more completely and efficiently.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a flow chart of a method of analyzing test cases provided in accordance with an embodiment of the present application;
FIG. 2 is a schematic illustration of feature engineering provided in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of a method for predicting the cause of an automated test failure use case provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of an analysis device for test cases provided according to an embodiment of the present application;
fig. 5 is a schematic diagram of an electronic device provided according to an embodiment of the present application.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
To help those skilled in the art better understand the present solution, the technical solutions in the embodiments of the present application will be described below in detail with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on these embodiments without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the present application described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for presentation, analyzed data, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
The present invention will now be described with reference to preferred implementation steps. Fig. 1 is a flowchart of a method for analyzing test cases according to an embodiment of the present application; as shown in fig. 1, the method includes the following steps:
Step S101: the time-series data, text data and attribute data of a failed test case are obtained, wherein the time-series data characterize the stability of the test case during execution, the text data characterize the specificity of the logs produced while the test case executes, and the attribute data characterize the uniqueness of the test case.
Specifically, because considering only keywords and error-stack information cannot explain failures that arise over the whole run of a test case, further signals are needed to capture the multi-source causes of a failed case. Time-series analysis can be performed on the running state and time of the test case: the time-series data of the failed test case measure its running stability within a test month-version, thereby capturing environmental-stability problems. The specificity of the different logs produced during execution is measured through the text data of the failed case, i.e., by analyzing the keywords and semantic features of the console log, the error stack and the returned response body. Other possible causes of the failure are analyzed by combining the inherent attributes of each automated schedule, i.e., the attribute data.
Step S102: the time-series data, text data and attribute data are input into a target model to obtain the failure cause, wherein the target model is trained on multiple groups of training samples, each group comprising the time-series data, text data, attribute data and failure cause of one historical failed test case.
Specifically, the target model may be a neural network model. Training on the time-series data, text data, attribute data and failure causes of historical failed test cases yields a target model that can directly output the failure cause of a failed test case from the input time-series, text and attribute data.
This addresses the labor cost and failure-cause blind spots of the prior art and improves testers' debugging efficiency across the whole automated process. Training a target model on the feature data removes the need for manual cause analysis, captures environmental problems that are otherwise hard to detect across the automated test flow, and helps testers investigate failure causes from the perspectives of environment, data, functional changes and scripts.
Step S103: the execution method of the test case is adjusted based on the failure cause.
Specifically, after the failure cause of the failed test case is determined, the cause is recorded, and the execution method of subsequent test cases is adjusted accordingly so that the same failure cause does not recur.
According to the test case analysis method provided by the embodiments of the present application, the time-series data, text data and attribute data of a failed test case are obtained, wherein the time-series data characterize the stability of the test case during execution, the text data characterize the specificity of the logs produced while the test case executes, and the attribute data characterize the uniqueness of the test case; the time-series data, text data and attribute data are input into a target model to obtain the failure cause, the target model being trained on multiple groups of training samples, each group comprising the time-series data, text data, attribute data and failure cause of one historical failed test case; and the execution method of the test case is adjusted based on the failure cause. This solves the problem in the related art of low efficiency when analyzing the causes of failed test cases: inputting the three kinds of data into the target model allows the failure cause to be analyzed comprehensively, so test case failures are analyzed more completely and efficiently.
Considering that the running stability of a test case calls for its time-series data, optionally, in the test case analysis method provided by the embodiments of the present application, acquiring the time-series data of the failed test case includes: acquiring the execution time and execution state of each execution of the failed test case within a preset period, the execution state being either success or failure; determining a first execution time at which the failed test case was observed; determining a group of second execution times whose execution state is failure, and, for each second execution time, determining the adjacent execution whose state is success as a third execution time, thereby obtaining a plurality of third execution times; calculating the difference between each third execution time and the first execution time to obtain a group of differences; and determining target values of the group of differences as the time-series data, wherein the target values comprise at least one of: maximum, minimum, median and variance.
Specifically, the preset period may be one month. By collecting time-series data during automated testing, the execution times and execution states of the failed test case over the several executions within a test month-version are obtained, and the collected time-series data are processed and analyzed. A test month-version comprises multiple executions, each recording its execution state and execution time. The execution times of the failed test case within the month-version run from time t-n through the current time t: time t-n corresponds to a historical running state, while the current time t corresponds to the execution state at which the current failure appears; each historical running state is either success or failure. A group of differences between the current time t (the first execution time) and the third execution times is calculated, and a target value of the group of differences, which may be its maximum, minimum, median, variance or the like, is determined as the time-series data. The time-series data measure the change in environmental stability of the current test case across the whole month-version execution, so that failure causes arising from environmental factors can be analyzed.
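The derivation above — taking the successful executions adjacent to failed ones, differencing them against the observed failure time, and reducing the differences to maximum/minimum/median/variance — can be sketched in Python. This is a minimal illustration under assumed data layouts and function names, not the patent's implementation:

```python
from statistics import median, pvariance

def time_series_features(runs, failure_time):
    """Reduce one period of executions to the target values named above.

    runs: chronological list of (execution_time, succeeded) pairs.
    failure_time: first execution time at which the failed case appeared.
    """
    # "Third execution times": successful runs adjacent to a failed run.
    thirds = [
        t for i, (t, ok) in enumerate(runs)
        if ok and any(not runs[j][1] for j in (i - 1, i + 1) if 0 <= j < len(runs))
    ]
    # Differences against the first execution time (the observed failure).
    diffs = [t - failure_time for t in thirds]
    return {
        "max": max(diffs),
        "min": min(diffs),
        "median": median(diffs),
        "variance": pvariance(diffs),
    }
```

Any subset of the four target values could then be fed to the model as the time-series portion of the feature vector.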
To account for the specificity of the logs when analyzing a failed test case, its text data must be acquired. Optionally, in the test case analysis method provided by the embodiments of the present application, obtaining the text data of the failed test case includes: obtaining a target log associated with the failed test case and the response-body information returned by the console running the failed test case, wherein the target log comprises at least one of: an integration-tool scheduling log, a console log and an error-stack log; extracting at least one keyword from the target log and inputting the keyword into a preset semantic-extraction model to obtain semantic features; and determining the keywords, semantic features and response-body information as the text data.
Specifically, the integration-tool scheduling log may be a Jenkins scheduling log, and the preset semantic-extraction model may be a BERT (Bidirectional Encoder Representations from Transformers) pre-trained natural-language-processing model. Text information relating to the execution of the failed test case within one test month-version is obtained from the different target logs; key information is extracted from each kind of log record and then analyzed and mined. The whole automated test process involves the Jenkins scheduling log, the console log and the error-stack log. The Jenkins scheduling log is usually large, so information is mined by keyword extraction, i.e., by checking which keywords appear in the scheduling log; for example, a final keyword list can be obtained as the union of lists derived from keyword weights and word frequencies. Console logs usually have a fixed format, with fields such as msg and code, where msg carries very important key information through which the text information in the console log is extracted. Error-stack logs usually carry distinctive information; for example, sentences containing words such as "exception" hold important information about the root failure cause. After the important text is extracted from the target logs by regular-expression matching, semantic features are extracted with a BERT model, and the keywords, semantic features and response-body information are determined as the text data. Acquiring the text data provides input features that account for the specificity of the logs when analyzing the failure cause.
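The regular-expression stage described above can be sketched as follows. The patterns are invented examples (real rules would be tuned to the actual Jenkins, console and error-stack formats), and a trivial bag-of-words counter stands in for the BERT semantic features, which would require a pre-trained model:

```python
import re
from collections import Counter

# Illustrative patterns only: exception sentences from error stacks,
# the msg field of a console log entry, and a plain keyword.
KEYWORD_PATTERNS = [
    r"exception[^\n]*",
    r'"msg"\s*:\s*"([^"]*)"',
    r"timeout",
]

def extract_text_features(log_text):
    """Extract keywords by regex matching; a bag-of-words Counter is a
    placeholder for the BERT semantic-feature vector."""
    keywords = []
    for pat in KEYWORD_PATTERNS:
        keywords.extend(re.findall(pat, log_text, flags=re.IGNORECASE))
    bow = Counter(re.findall(r"[a-z]+", log_text.lower()))
    return {"keywords": keywords, "bow": bow}
```

The keywords, the (real) semantic features and the response-body information together form the text portion of the feature vector.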
Considering that the uniqueness of a test case calls for its attribute data, optionally, in the test case analysis method provided by the embodiments of the present application, obtaining the attribute data of the failed test case includes: determining the test-framework type, application type, test-progress stage, stub (mock service) state, communication-problem state, response-body return code, uniform-resource-locator features and interface type of the failed test case; and determining at least one of these as the attribute data.
Specifically, the test-framework type may be, for example, a modular test framework, a data-driven framework, a keyword-driven framework or a hybrid model. Different framework types, application types, test-progress stages, stub states, communication-problem states, response-body return codes, uniform-resource-locator features and interface types can all lead to different causes of test case failure, so at least one of these may be selected as the attribute data. Acquiring the attribute data provides input features that account for the uniqueness of the test case when analyzing the failure cause.
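Turning such categorical attribute fields into model inputs is conventionally done with one-hot encoding; a minimal sketch for two of the eight fields (category vocabularies here are invented, and the remaining fields could be encoded the same way):

```python
# Hypothetical category vocabularies for two attribute fields.
FRAMEWORK_TYPES = ["modular", "data_driven", "keyword_driven", "hybrid"]
RETURN_CODES = ["200", "404", "500", "timeout"]

def encode_attributes(framework_type, return_code):
    """One-hot encode the test-framework type and response-body return code."""
    vec = [1 if framework_type == f else 0 for f in FRAMEWORK_TYPES]
    vec += [1 if return_code == c else 0 for c in RETURN_CODES]
    return vec
```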
Optionally, in the test case analysis method provided by the embodiments of the present application, before inputting the time-series data, text data and attribute data into the target model, the method further includes: acquiring each historical failed test case and its corresponding historical failure cause, and extracting the time-series data, text data and attribute data of the historical failed test cases; taking the time-series data, text data, attribute data and historical failure cause of one historical failed test case as one group of training samples, thereby obtaining multiple groups of training samples; and inputting the multiple groups of training samples into a preset model for training to obtain the target model.
Specifically, when constructing the training samples, feature engineering comprehensively considers the stability of case execution, the specificity of the logs and the inherent characteristics of the case, and derives and integrates features from the data of the historical failed test cases, comprising case time-series data, text data and attribute data. The time-series data consider the historical running states, the current running state and derived features of the failed test case; the text data consider the log keywords, semantic features and the response-body information returned by the console; and the attribute data consider the test-progress stage, communication-problem state, response-body return code, uniform-resource-locator features, interface type and the like. Fig. 2 is a schematic diagram of the feature engineering provided according to an embodiment of the present application; as shown in fig. 2, word-frequency features, word-vector features, time-difference features, keyword features and statistical features are extracted from the data of the historical failed test cases. The time-series, text and attribute data obtained through feature engineering, together with the corresponding failure cause, form one group of training samples; the multiple groups of training samples corresponding to the historical failed test cases are input into a preset neural network model for training to obtain the target model, which is then used to analyze the failure cause of the current failed test case.
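The patent's target model is a neural network; purely as a self-contained stand-in, the same fit/predict cycle over (feature vector, failure cause) training groups can be illustrated with a nearest-centroid classifier:

```python
class NearestCentroidModel:
    """Toy stand-in for the neural-network target model: predicts the
    failure cause whose training centroid is nearest the feature vector."""

    def fit(self, samples):
        # samples: list of (feature_vector, failure_cause) groups.
        grouped = {}
        for vec, cause in samples:
            grouped.setdefault(cause, []).append(vec)
        # Per-cause mean of each feature dimension.
        self.centroids = {
            cause: [sum(col) / len(vecs) for col in zip(*vecs)]
            for cause, vecs in grouped.items()
        }
        return self

    def predict(self, vec):
        def sq_dist(cause):
            return sum((a - b) ** 2 for a, b in zip(vec, self.centroids[cause]))
        return min(self.centroids, key=sq_dist)
```

The feature vector here would be the concatenation of the time-series, text and attribute features; the real system would substitute the trained neural network.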
To ensure the accuracy of the failure cause, the actual failure cause must be determined according to preset rules. Optionally, in the test case analysis method provided by the embodiments of the present application, before adjusting the execution method of the test case based on the failure cause, the method further includes: determining the actual failure cause of the failed test case through a preset rule; judging whether the predicted failure cause is the same as the actual failure cause; and, when they differ, adjusting the execution method of the test case based on the actual failure cause.
Specifically, the preset rule may be a rule summarized by testers from expert experience. The test case is analyzed through the preset rule to obtain the actual failure reason, and the execution method of the test case is adjusted based on the actual failure reason. Determining the actual failure reason in this way safeguards the analysis accuracy of the target model.
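A minimal sketch of this comparison step, assuming the preset rules are simple substring patterns mapped to failure reasons; the patterns and reason labels below are invented placeholders, not rules from the application.

```python
# Expert-experience rules: (log pattern, actual failure reason).
# Both columns are invented placeholders for illustration.
EXPERT_RULES = [
    ("Connection reset", "environment/communication problem"),
    ("AssertionError", "assertion mismatch in case logic"),
]

def actual_failure_reason(log_text, default="unknown"):
    for pattern, reason in EXPERT_RULES:
        if pattern in log_text:
            return reason
    return default

def resolve_reason(predicted_reason, log_text):
    actual = actual_failure_reason(log_text)
    # When the rule-derived reason disagrees with the model, prefer the rules.
    if actual != "unknown" and actual != predicted_reason:
        return actual
    return predicted_reason

resolved = resolve_reason("timeout", "AssertionError: 1 != 2")
```

When no rule fires, the model's prediction stands; when a rule fires and disagrees, the rule-derived reason drives the adjustment of the execution method.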
Optionally, in the method for analyzing a test case provided in the embodiment of the present application, when a failure reason is different from an actual failure reason, the method further includes: obtaining a plurality of groups of correction samples in a preset period, wherein each group of correction samples comprises a failure test case and an actual failure reason; correcting the target model based on a plurality of groups of correction samples to obtain a corrected target model; and replacing the target model with the corrected target model.
Specifically, when the failure reason is different from the actual failure reason, the target model's analysis of the failure reason of the current failure test case is shown to be wrong, and the target model needs to be corrected in time. Failure test cases whose predicted failure reason differs from the actual failure reason are taken as correction samples, multiple groups of correction samples collected within one day are used to train the target model, and the corrected target model is obtained. Correcting the target model in this way ensures its analysis accuracy.
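The daily correction step might be sketched as follows, assuming the model exposes an incremental-training interface; the `partial_fit` call and all field names are assumptions for illustration, as the application does not name a specific API.

```python
def collect_correction_samples(day_cases):
    # A case becomes a correction sample when its predicted failure reason
    # differs from the tester-confirmed actual failure reason.
    return [(c["features"], c["actual_reason"])
            for c in day_cases
            if c["predicted_reason"] != c["actual_reason"]]

def correct_model(model, samples):
    if samples:
        features = [f for f, _ in samples]
        reasons = [r for _, r in samples]
        model.partial_fit(features, reasons)   # assumed incremental interface
    return model

samples = collect_correction_samples([
    {"features": [1, 0], "predicted_reason": "env", "actual_reason": "script"},
    {"features": [0, 1], "predicted_reason": "env", "actual_reason": "env"},
])
```

Only mispredicted cases enter the correction set, so correctly predicted cases do not skew the retraining toward already-learned patterns.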
According to another embodiment of the present application, a method for predicting the cause of an automated test failure case is provided. Fig. 3 is a flowchart of the method for predicting the cause of an automated test failure case according to an embodiment of the present application. As shown in fig. 3, the method includes:
after the model predicts the failure reasons of the first day's failure cases and the predictions are matched, the failure cases checked by testers are input into the alpha version model for training to obtain a beta version model, and the beta version model serves as the prediction model for the second day. The test cases obtained in real time on the second day whose failure reasons have not been matched, together with all test cases from the first day whose failure reasons were not matched, are input into the beta version model for prediction; the failure reasons are updated according to the prediction results and checked by testers, and the checked failure cases are used as the training samples for the next day.
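This alpha-to-beta rotation can be illustrated with a toy stand-in model whose internals are invented purely for demonstration and bear no relation to the neural network model of the application:

```python
# Toy stand-in for the prediction model: predicts the most frequently
# confirmed failure reason. Purely illustrative.
class MajorityModel:
    def __init__(self, counts=None):
        self.counts = dict(counts or {})

    def retrain(self, checked_cases):
        # Tester-checked (case, reason) pairs from day N yield the day N+1 model.
        new = MajorityModel(self.counts)
        for _case, reason in checked_cases:
            new.counts[reason] = new.counts.get(reason, 0) + 1
        return new

    def predict(self, _case):
        return max(self.counts, key=self.counts.get) if self.counts else "unknown"

alpha = MajorityModel()
beta = alpha.retrain([("case1", "environment issue"),
                      ("case2", "environment issue"),
                      ("case3", "script defect")])
# Day-2 unmatched cases plus day-1 leftovers go to the beta version model.
predictions = [beta.predict(c) for c in ["case4", "case5"]]
```

The point of the rotation is that each day's model is trained only on tester-verified labels, so prediction errors do not compound from one day to the next.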
The cause prediction method for automated test failure cases comprehensively considers the full life cycle of the current failed case. Not only is the Bug content analyzed, but the stability of case execution, the specificity of the logs and the inherent characteristics of the case are also comprehensively considered, so that the failure reasons of test cases are analyzed more comprehensively and efficiently.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system capable of executing a set of computer-executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment of the application also provides an analysis device for the test case, and it should be noted that the analysis device for the test case in the embodiment of the application can be used for executing the analysis method for the test case provided in the embodiment of the application. The following describes an analysis device for test cases provided in the embodiments of the present application.
Fig. 4 is a schematic diagram of an analysis device for test cases according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
The acquisition unit 10 is configured to acquire time sequence data, text data and attribute data of a failed test case, where the time sequence data are data representing the stability of the test case in the execution process, the text data are data representing the specificity of the logs in the test case execution process, and the attribute data are data representing the uniqueness of the test case;
An input unit 20, configured to input the time sequence data, the text data and the attribute data into a target model to obtain a failure cause, where the target model is obtained by training multiple sets of training samples, and each set of training samples includes the time sequence data, the text data, the attribute data and the failure cause of one historical failure test case;
an adjusting unit 30, configured to adjust an execution method of the test case based on the failure reason.
In the analysis device for test cases, the acquisition unit 10 acquires the time sequence data, text data and attribute data of a failed test case, where the time sequence data represent the stability of the test case during execution, the text data represent the specificity of the logs during test case execution, and the attribute data represent the uniqueness of the test case. The input unit 20 inputs the time sequence data, the text data and the attribute data into a target model to obtain a failure reason, where the target model is obtained by training multiple groups of training samples, and each group of training samples includes the time sequence data, text data, attribute data and failure reason of one historical failure test case. The adjustment unit 30 adjusts the execution method of the test case based on the failure reason. The device thereby solves the problem in the related art of low efficiency in analyzing the failure reasons of failed test cases, and achieves the effect of analyzing failure reasons more comprehensively and efficiently by inputting the time sequence data, text data and attribute data of the failed test case into the target model for comprehensive analysis.
Optionally, in the analysis device for test cases provided in the embodiment of the present application, the acquisition unit 10 includes: a first acquisition module, configured to acquire the execution time and execution state of each execution of the failed test case within a preset period, where the execution state includes execution success and execution failure; a first determining module, configured to determine a first execution time at which the failure of the test case is found, determine a group of second execution times whose execution state is execution failure, and determine, for each second execution time, the adjacent execution whose state is execution success as a third execution time, thereby obtaining multiple third execution times; a calculation module, configured to calculate the difference between each third execution time and the first execution time to obtain a group of differences; and a second determining module, configured to determine a target value of the group of differences and determine the target value as the time sequence data, where the target value includes at least one of: maximum, minimum, median and variance.
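The computation performed by these modules might look like the following sketch, with variable names chosen for illustration:

```python
from statistics import median, pvariance

def time_sequence_features(first_time, third_times):
    # Differences between each adjacent-success execution time and the
    # execution time at which the failure was found.
    diffs = [t - first_time for t in third_times]
    return {"max": max(diffs), "min": min(diffs),
            "median": median(diffs), "variance": pvariance(diffs)}

feats = time_sequence_features(10.0, [12.0, 13.0, 18.0])
```

A stable case produces tightly clustered differences (low variance), while an unstable, flaky case produces widely scattered ones, which is why these statistics serve as the stability signal.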
Optionally, in the analysis device for test cases provided in the embodiment of the present application, the acquisition unit 10 includes: a second acquisition module, configured to acquire a target log associated with the failed test case and the response body information returned by the console running the failed test case, where the target log includes at least one of the following: an integration-tool dispatch log, a console log and an error stack log; an extraction module, configured to extract at least one keyword from the target log and input the keyword into a preset semantic extraction model to obtain semantic features; and a third determining module, configured to determine the keywords, the semantic features and the response body information as the text data.
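A sketch of the keyword-extraction step; the regular expression and keyword set are assumptions, and the preset semantic extraction model is replaced by a trivial placeholder:

```python
import re

# Illustrative error patterns -- not specified by the application.
ERROR_PATTERNS = re.compile(r"(timeout|NullPointerException|ECONNREFUSED|5\d\d)")

def extract_keywords(target_log):
    return sorted(set(ERROR_PATTERNS.findall(target_log)))

def semantic_features(keywords):
    # Placeholder standing in for the preset semantic extraction model.
    return [len(k) for k in keywords]

keywords = extract_keywords("dispatch ok ... HTTP 503 ... read timeout")
text_data = {"keywords": keywords, "semantics": semantic_features(keywords)}
```

In practice the placeholder would be a trained text model (e.g. a sentence encoder) producing dense vectors rather than keyword lengths.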
Optionally, in the analysis device for test cases provided in the embodiment of the present application, the acquisition unit 10 includes: a fourth determining module, configured to determine the test framework type, application type, test progress stage, during-release state, communication problem state, response body return code, uniform resource locator features and interface type of the failed test case; and a fifth determining module, configured to determine at least one of the test framework type, application type, test progress stage, during-release state, communication problem state, response body return code, uniform resource locator features and interface type as the attribute data.
Optionally, in the test case analysis device provided in the embodiment of the present application, the device further includes: the historical failure cause acquisition unit is used for acquiring each historical failure test case and the corresponding historical failure cause and extracting time sequence data, text data and attribute data in the historical failure test cases; the first determining unit is used for taking time sequence data, text data, attribute data and historical failure reasons in the historical failure test cases as a group of training samples to obtain a plurality of groups of training samples; the training unit is used for inputting a plurality of groups of training samples into the preset model for training to obtain a target model.
Optionally, in the test case analysis device provided in the embodiment of the present application, the device further includes: the second determining unit is used for determining the actual failure reason of the failure test case through a preset rule; the judging unit is used for judging whether the failure reason is the same as the actual failure reason; the actual failure reason adjusting unit is used for adjusting the execution method of the test case based on the actual failure reason under the condition that the failure reason is different from the actual failure reason.
Optionally, in the test case analysis device provided in the embodiment of the present application, the device further includes: the correction sample acquisition unit is used for acquiring a plurality of groups of correction samples in a preset period, wherein each group of correction samples comprises a failure test case and an actual failure reason; the correction unit is used for correcting the target model based on a plurality of groups of correction samples to obtain a corrected target model; and the replacing unit is used for replacing the target model with the corrected target model.
The analysis device for test cases includes a processor and a memory, the acquisition unit 10, the input unit 20, the adjustment unit 30, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. One or more kernels may be provided, and the failure reasons of the test cases are analyzed more comprehensively and efficiently by adjusting the kernel parameters.
The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). The memory includes at least one memory chip.
The embodiment of the invention provides a computer readable storage medium, on which a program is stored, which when executed by a processor, implements a method for analyzing test cases.
The embodiment of the invention provides a processor, which is used for running a program, wherein the program runs to execute an analysis method of test cases.
The embodiment of the invention provides an electronic device, and fig. 5 is a schematic diagram of the electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device 501 includes a processor, a memory, and a program stored on the memory and executable on the processor; when executing the program, the processor implements the steps of the analysis method for test cases. The device herein may be a server, a PC, a tablet, a mobile phone, or the like.
The present application also provides a computer program product adapted to perform, when executed on a data processing device, a program initialized with the method steps of: an analysis method of test cases.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A method of analyzing a test case, comprising:
acquiring time sequence data, text data and attribute data of a failed test case, wherein the time sequence data is data representing stability of the test case in the execution process, the text data is data representing the specificity of a log in the test case in the execution process, and the attribute data is data representing the uniqueness of the test case;
inputting the time sequence data, the text data and the attribute data into a target model to obtain a failure reason, wherein the target model is trained by a plurality of groups of training samples, and each group of training samples comprises the time sequence data, the text data, the attribute data and the failure reason of one historical failure test case;
and adjusting the execution method of the test case based on the failure reason.
2. The method of claim 1, wherein obtaining time series data for a failed test case comprises:
acquiring the execution time and the execution state of each execution process of the failure test case in a preset period, wherein the execution state comprises execution success and execution failure;
determining a first execution time at which the failure of the test case is found, determining a group of second execution times whose execution state is execution failure, and determining, for each second execution time, the adjacent execution whose execution state is execution success as a third execution time, thereby obtaining a plurality of third execution times;
calculating the difference value between each third execution time and the first execution time to obtain a group of difference values;
determining a target value of the set of differences, the target value being determined as the time series data, wherein the target value comprises at least one of: maximum, minimum, median and variance.
3. The method of claim 1, wherein obtaining text data for a failed test case comprises:
obtaining a target log associated with the failure test case and response body information returned by a console running the failure test case, wherein the target log at least comprises one of the following: integrating tool dispatching logs, console logs and error reporting stack logs;
extracting at least one keyword from the target log, and inputting the keyword into a preset semantic extraction model to obtain semantic features;
and determining the keywords, the semantic features and the response body information as the text data.
4. The method of claim 1, wherein obtaining attribute data for a failed test case comprises:
determining the test framework type, the application type, the test progress stage, the during-release state, the communication problem state, the response body return code, the uniform resource locator feature and the interface type of the failure test case;
determining at least one of the test framework type, the application type, the test progress stage, the during-release state, the communication problem state, the response body return code, the uniform resource locator feature, and the interface type as the attribute data.
5. The method of claim 1, wherein prior to entering the time series data, the text data, and the attribute data into a target model, the method further comprises:
acquiring each historical failure test case and a corresponding historical failure reason, and extracting time sequence data, text data and attribute data in the historical failure test cases;
taking time sequence data, text data, attribute data and the historical failure reasons in the historical failure test case as a group of training samples to obtain a plurality of groups of training samples;
and inputting the multiple groups of training samples into a preset model for training to obtain the target model.
6. The method of claim 1, wherein prior to adjusting the execution method of the test case based on the failure cause, the method further comprises:
determining the actual failure reason of the failure test case through a preset rule;
judging whether the failure reason is the same as the actual failure reason;
in case the failure cause is not the same as the actual failure cause,
and adjusting the execution method of the test case based on the actual failure reason.
7. The method of claim 6, wherein in the event that the cause of failure is not the same as the actual cause of failure, the method further comprises:
obtaining a plurality of groups of correction samples in a preset period, wherein each group of correction samples comprises a failure test case and an actual failure reason;
correcting the target model based on the plurality of groups of correction samples to obtain a corrected target model;
and replacing the target model with the corrected target model.
8. An analysis device for a test case, comprising:
the acquisition unit is used for acquiring time sequence data, text data and attribute data of the failed test case, wherein the time sequence data is data representing stability of the test case in the execution process, the text data is data representing the specificity of a log in the test case in the execution process, and the attribute data is data representing the uniqueness of the test case;
the input unit is used for inputting the time sequence data, the text data and the attribute data into a target model to obtain a failure reason, wherein the target model is trained by a plurality of groups of training samples, and each group of training samples comprises the time sequence data, the text data, the attribute data and the failure reason of one historical failure test case;
and the adjusting unit is used for adjusting the execution method of the test case based on the failure reason.
9. A processor, wherein the processor is configured to run a program, wherein the program, when run, performs the method of analyzing test cases according to any one of claims 1 to 7.
10. An electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of analyzing test cases of any of claims 1-7.
CN202310274461.7A 2023-03-17 2023-03-17 Analysis method and device for test cases, processor and electronic equipment Pending CN116303029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310274461.7A CN116303029A (en) 2023-03-17 2023-03-17 Analysis method and device for test cases, processor and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310274461.7A CN116303029A (en) 2023-03-17 2023-03-17 Analysis method and device for test cases, processor and electronic equipment

Publications (1)

Publication Number Publication Date
CN116303029A true CN116303029A (en) 2023-06-23

Family

ID=86832050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310274461.7A Pending CN116303029A (en) 2023-03-17 2023-03-17 Analysis method and device for test cases, processor and electronic equipment

Country Status (1)

Country Link
CN (1) CN116303029A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination