CN113392009A - Exception handling method and device for automatic test - Google Patents

Exception handling method and device for automatic test

Info

Publication number
CN113392009A
CN113392009A (application number CN202110685533.8A)
Authority
CN
China
Prior art keywords
data
test data
execution
result
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110685533.8A
Other languages
Chinese (zh)
Inventor
江明旭
车洋
张展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110685533.8A
Publication of CN113392009A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • G06F 11/3688 - Test management for test execution, e.g. scheduling of test suites
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • G06F 11/3692 - Test management for test results analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

An exception handling method and device for automated testing relate to the field of artificial intelligence and can be used in the financial field or other fields. The method comprises: obtaining a test data set, reading the test data in the set one by one, and obtaining a result tag corresponding to each piece of test data; if the result tag corresponding to a piece of test data is failure or empty, executing the test data and collecting its result data during execution; if the result data is success, updating the result tag corresponding to the test data to success; and if the result data is failure, repeatedly executing the test data and collecting execution data until the number of repeated executions reaches a preset value or the result data of the test data is success. The method and device improve stability and exception handling efficiency when exceptions occur during automated testing, make it easy to trace case failure causes afterwards, and record successfully executed test data so that successful execution records can be skipped the next time the program runs, avoiding repeated execution and improving operating efficiency.

Description

Exception handling method and device for automatic test
Technical Field
The present invention relates to the field of automated testing technologies, and in particular, to an exception handling method and apparatus for automated testing.
Background
In automated testing, various abnormal conditions often occur and cause test-case execution to fail. Handling each abnormal condition individually is time-consuming and labor-intensive, and unexpected new abnormal conditions keep appearing; a mechanism that can handle exceptions generically is therefore needed to improve both the stability of automated tests and the efficiency of exception handling.
Meanwhile, after parameterized configuration, when multiple test cases are executed in a loop, there is no clear record of which cases succeeded and which failed. When a case fails, its execution cannot be traced quickly to find the failing step or the cause of the failure; and because it is not known which cases succeeded, the next re-run must still execute all cases from the beginning.
Disclosure of Invention
In view of the problems in the prior art, embodiments of the present invention provide an exception handling method and apparatus for automated testing, which effectively address exception handling and the lack of clear execution records in automated tests.
To achieve the above object, an embodiment of the present invention provides an exception handling method for automated testing, the method comprising:
obtaining a test data set, reading the test data in the test data set one by one, and obtaining a result tag corresponding to each piece of test data;
if the result tag corresponding to the test data is failure or empty, executing the test data, and collecting result data of the test data during execution;
if the result data is success, updating the result tag corresponding to the test data to success;
and if the result data is failure, repeatedly executing the test data and collecting execution data corresponding to the test data until the number of repeated executions reaches a preset value or the result data of the test data is success.
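A minimal sketch of this flow in Python is given below. It is an illustrative reading of the steps, not the claimed implementation: the dataset layout, the tag values, and the run_case executor are assumptions introduced here for clarity.

```python
# Illustrative sketch of the claimed flow; run_case() and the tag storage
# layout are hypothetical stand-ins for the real test runner.
MAX_RETRIES = 5  # the "preset value"; 5 is only the example used in the description

def process_dataset(test_dataset, result_tags, run_case):
    """Read test data one by one and execute only cases not yet successful."""
    failure_records = []  # execution data collected from failed runs
    for case in test_dataset:
        tags = result_tags.setdefault(case["id"], [])
        if "success" in tags:
            continue  # already ran successfully once; skip to avoid repeated execution
        for _ in range(MAX_RETRIES):
            ok, execution_data = run_case(case)  # result data + collected execution data
            tags.append("success" if ok else "failure")
            if ok:
                break  # result data is success: stop repeating
            failure_records.append(execution_data)  # kept for failure-cause analysis
    return failure_records
```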
Optionally, in an embodiment of the present invention, executing the test data and collecting result data of the test data during execution includes:
executing the test data, and collecting detail data and result data of the test data during execution; the detail data comprises screenshot data and page information.
Optionally, in an embodiment of the present invention, collecting the detail data of the test data during execution includes:
taking screenshots of the test data execution page at a preset time interval to obtain screenshot data, and performing character recognition on the screenshot data to obtain character information;
and performing node identification on the test data execution page at a preset time interval to obtain character attribute content, and taking the character information and the character attribute content as the page information.
Optionally, in an embodiment of the present invention, the method further includes: inputting the execution data into a pre-established failure cause analysis model to obtain the failure cause corresponding to the test data.
Optionally, in an embodiment of the present invention, the execution data includes an error code, a response time, pop-up information, and log data.
Optionally, in an embodiment of the present invention, the failure cause analysis model is established as follows:
acquiring historical failure data and the failure causes and execution data corresponding to the historical failure data, and using them as training samples;
and inputting the training samples into an initial artificial intelligence model for model training to obtain the failure cause analysis model.
An embodiment of the present invention also provides an exception handling apparatus for automated testing, the apparatus comprising:
a test data module, configured to obtain a test data set, read the test data in the test data set one by one, and obtain a result tag corresponding to each piece of test data;
a test execution module, configured to execute the test data and collect result data of the test data during execution if the result tag corresponding to the test data is failure or empty;
a tag updating module, configured to update the result tag corresponding to the test data to success if the result data is success;
and a repeated execution module, configured to repeatedly execute the test data if the result data is failure, and collect execution data corresponding to the test data until the number of repeated executions reaches a preset value or the result data of the test data is success.
Optionally, in an embodiment of the present invention, the apparatus further includes: a failure cause analysis module, configured to input the execution data into a pre-established failure cause analysis model to obtain the failure cause corresponding to the test data.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method when executing the program.
The present invention also provides a computer-readable storage medium storing a computer program for executing the above method.
By recording various information about test data runs, the method and device improve stability when exceptions occur during automated testing and the efficiency of handling them, and make it easy to trace case failure causes afterwards; successfully executed test data is also recorded, so successful execution records can be skipped the next time the program runs, avoiding repeated execution and improving operating efficiency.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of an exception handling method for automated testing according to an embodiment of the present invention;
FIG. 2 is a flow chart of collecting detailed data in an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the establishment of a failure cause analysis model according to an embodiment of the present invention;
FIG. 4 is a workflow diagram of a system applying the exception handling method for automated testing according to an embodiment of the present invention;
FIG. 5 is a flowchart of the operation of an execution module in an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a recording module according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an exception handling apparatus for automated testing according to an embodiment of the present invention;
FIG. 8 is a block diagram of an exception handling apparatus for automated testing according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide an exception handling method and device for automated testing, which can be used in the financial field or other fields.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an exception handling method for automated testing according to an embodiment of the present invention. The execution subject of the method includes, but is not limited to, a computer. The method shown in the figure comprises the following steps:
Step S1, obtaining a test data set, reading the test data in the test data set one by one, and obtaining a result tag corresponding to each piece of test data.
The test data set can be stored in a database and is obtained from the database when test cases need to be executed. The test data set comprises multiple pieces of test data, each with a corresponding result tag. Specifically, the test data in the test data set are read one by one, and the result tag corresponding to each piece of test data is obtained from the database or another storage device.
Step S2, if the result tag corresponding to the test data is failure or empty, executing the test data and collecting result data of the test data during execution.
If the result tag of the test data is success, the test data has already run successfully and does not need to run again, which saves time and improves efficiency.
Further, a result tag may be success, failure, or null (null meaning the test data has never been executed).
Further, if the result tag of the test data is failure, the test data is executed: the test data whose result tag is failure is configured, and after configuration the test data is executed. Likewise, if the result tag corresponding to the test data is null, the test data has never been executed before; it is configured and then executed.
Further, when the test data is executed, the result data of the execution is collected; specifically, the result data may be success or failure.
Step S3, if the result data is successful, updating the result label corresponding to the test data to be successful.
If the result data is successful, it indicates that no exception occurs in the running process of the test data or the result is consistent with the set successful result, the test data is considered to be successfully executed, and the result label corresponding to the test data is updated to be successful.
Further, the updating of the result tag may be to directly modify the result tag to be successful, or the result tag may be a sequence, and the process of updating the result tag is to sequentially add the latest result in the sequence. For example, the result tag is { sequence number 1: [ 'fail', 'fail' ] }, which may be updated as { sequence number 1: [ 'failure', 'success' ] }.
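With the sequence-style tag, the update is a plain append; a tiny sketch follows, assuming (purely for illustration) that tags are kept in a dict keyed by sequence number:

```python
# Sequence-style result tag: the history is preserved and the latest
# result is appended. The dict layout is an illustrative assumption.
result_tags = {1: ["failure", "failure"]}
result_tags[1].append("success")  # after a successful re-run
assert result_tags == {1: ["failure", "failure", "success"]}
```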
Step S4, if the result data is failure, repeatedly executing the test data and collecting execution data corresponding to the test data until the number of repeated executions reaches a preset value or the result data of the test data is success.
If the result data is failure, an exception or error occurred during execution of the test data, or the result does not match the configured success result; execution of the test data is considered to have failed.
Further, when execution of the test data fails, the test data is executed again, repeatedly, until its result data becomes success or the number of repeated executions reaches a preset value. For example, if the preset value is 5, execution of that piece of test data stops once it has been repeated 5 times.
Further, if the result data is failure, execution data is collected while the test data is executed; specifically, the execution data includes an error code, a response time, pop-up information, and log data.
Furthermore, for test data that fails to execute: if program execution is interrupted by an exception, the test data is judged to have failed; the error code and page pop-up information at the time of the exception can be captured, and the character attribute content newly added to the page can be recognized or extracted by OCR on a page screenshot. The response time is recorded for every piece of test data, including failed data, and the log is the normal log content of each test data run, again including failed test data.
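One plausible way to capture the execution data named here is sketched below; the execute, read_popup, and read_log hooks are hypothetical stand-ins for the real runner and page/log accessors.

```python
import time
import traceback

def run_with_capture(execute, case, read_popup=lambda: "", read_log=lambda c: ""):
    """Execute one piece of test data and capture the execution data
    described above: error code, response time, pop-up information, log."""
    record = {"case_id": case["id"]}
    start = time.monotonic()
    try:
        execute(case)
        record["result"] = "success"
    except Exception as exc:  # any abnormal interruption counts as a failure
        record["result"] = "failure"
        record["error_code"] = getattr(exc, "code", type(exc).__name__)
        record["traceback"] = traceback.format_exc()
        record["popup"] = read_popup()  # page pop-up information at failure time
    finally:
        # response time is recorded for every piece of test data, failed or not
        record["response_time"] = time.monotonic() - start
    record["log"] = read_log(case)  # normal log content of this run
    return record
```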
As an embodiment of the present invention, executing the test data and collecting result data of the test data during execution includes: executing the test data, and collecting detail data and result data of the test data during execution; the detail data comprises screenshot data and page information.
When the test data is executed, all detail data and result data produced while that piece of test data runs are recorded. The detail data is all process detail data generated during the run; the result data is success or failure.
Further, the detail data includes screenshot data and page information. The screenshot data is obtained by periodically capturing the execution page, and the page information can be obtained by character recognition on the screenshot data or by node recognition on the execution page.
In this embodiment, as shown in Fig. 2, collecting the detail data of the test data during execution includes:
Step S21, taking screenshots of the test data execution page at a preset time interval to obtain screenshot data, and performing character recognition on the screenshot data to obtain character information;
Step S22, performing node identification on the test data execution page at a preset time interval to obtain character attribute content, and taking the character information and the character attribute content as the page information.
The preset time interval may be, for example, 1 second: a screenshot of the test data execution page is taken every second to obtain screenshot data.
Further, the page information is the information in the test data execution page and can be obtained in two ways: first, identifying nodes from the page structure and extracting and storing them, including character attributes; second, recognizing fields or other character information in the page screenshot data with OCR and storing them.
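A sketch of one round of detail collection follows. It assumes a Selenium WebDriver for the page under test and the pytesseract and lxml libraries for OCR and node identification; none of these tools are named by the description, so they are illustrative choices only.

```python
import time
import lxml.html
import pytesseract
from PIL import Image

def collect_page_details(driver, interval=1.0):
    """Capture one round of detail data: a screenshot, the character
    information OCR'd from it, and text/attribute content of page nodes.
    `interval` defaults to 1 second, the example interval above."""
    time.sleep(interval)                      # preset screenshot interval
    path = f"shot_{int(time.time())}.png"
    driver.save_screenshot(path)              # screenshot data
    char_info = pytesseract.image_to_string(Image.open(path))  # way two: OCR
    # Way one: identify nodes from the page structure, keeping text + attributes.
    root = lxml.html.fromstring(driver.page_source)
    nodes = [(el.tag, dict(el.attrib), el.text.strip())
             for el in root.iter()
             if isinstance(el.tag, str) and el.text and el.text.strip()]
    return {"screenshot": path, "char_info": char_info, "nodes": nodes}
```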
As an embodiment of the present invention, the method further comprises: inputting the execution data into a pre-established failure cause analysis model to obtain the failure cause corresponding to the test data.
For data that fails to execute, the execution data collected during execution of the test data is input into the pre-established failure cause analysis model, yielding a probability for each candidate failure cause; the failure cause with the highest probability is taken as the failure cause of the test case.
In this embodiment, the execution data includes an error code, a response time, pop-up information, and log data.
For test data that fails to execute, if program execution is interrupted by an exception, the test data is judged to have failed; the error code and page pop-up information at the time of the exception are captured, and the character attribute content newly added to the page is recognized or extracted by OCR on a page screenshot. The response time is recorded for every piece of test data, including failed data, and the log is the normal log content of each test data run, again including failed test data.
In this embodiment, as shown in Fig. 3, the failure cause analysis model is established as follows:
Step S31, acquiring historical failure data and the failure causes and execution data corresponding to it, and using them as training samples;
Step S32, inputting the training samples into an initial artificial intelligence model for model training to obtain the failure cause analysis model.
The initial artificial intelligence model can be an existing artificial intelligence model, such as a neural network or a convolutional neural network. Existing failure data, together with its failure causes and execution data, is used as training samples; the model parameters are trained on these samples to obtain the failure cause analysis model, which is then used to analyze and predict on failure data actually produced in testing and classify the failure causes.
Further, the failure cause analysis model can continuously adjust and optimize its parameters as data accumulates. Failure data that actually occurs is input into the model for matching prediction, yielding a probability for each failure cause type; the type with the highest probability is taken as the failure cause type of the test data.
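As a concrete illustration of the training and prediction loop, the sketch below uses a scikit-learn text classifier in place of the neural network the description mentions; the feature layout and the two toy samples are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training samples: execution data flattened to text, labelled with the
# known failure cause. A linear text classifier stands in for the "initial
# artificial intelligence model" (the description suggests e.g. a neural
# network); both samples and labels are invented for illustration.
samples = [
    ("ERR_TIMEOUT response_time=30s log: gateway unreachable", "environment"),
    ("ERR_ASSERT popup: certificate number invalid", "test data"),
]
texts, causes = zip(*samples)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, causes)

# Matching prediction: take the failure cause type with the highest probability.
probs = model.predict_proba(["ERR_TIMEOUT log: gateway unreachable"])[0]
predicted_cause = model.classes_[probs.argmax()]
```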
Fig. 4 shows the workflow of a system applying the exception handling method for automated testing according to an embodiment of the present invention. For exception handling, when any exception occurs the system closes the application process and restarts execution of the test data, solving all kinds of exception problems in a generic way and improving the stability of the automated test run; each exception no longer needs separate, targeted handling, which improves the efficiency of exception handling.
For case execution information recording, both the process information and the result information of case execution are recorded. During execution each case records its key related information, such as key fields, the page where the fields are located, and screenshots of that page, so the process information of each case execution is effectively captured; the result information of case execution, namely whether the case succeeded or failed, is recorded as well. For a case that fails, the process information and result information are combined to automatically compute an exception cause classification, so the cause of the case failure can be tracked effectively and the test problem analyzed further. For a case whose recorded execution result is success, the case can be skipped if the program needs to run again, without repeated execution. This effectively improves the efficiency of automated testing and of failure cause tracking.
Further, the system workflow is as follows:
1. Start the program run;
2. Read the test data set and, at the same time, the recording module's data set. If the result tag of a piece of test data contains success, that test data has already run successfully and need not run again, saving time; if the result tag of the test data is null or contains only failures, the test data is executed;
3. As shown in Fig. 5, after the test data set is read, the execution module extracts one piece of data, configures the test data, and starts executing the test:
3.1. If the test data runs successfully, the process ends;
3.2. If any exception interrupts the test data run, the test data is executed again from the beginning; in this loop, the test data is rerun whenever an exception occurs. If the number of cycles exceeds the preset maximum repetition parameter N and the run is still unsuccessful, the run ends; if the test data runs successfully during some cycle, the process ends as in 3.1;
4. Record all detail data and result data produced while the test data runs; the detail data is all process detail data of the run, and the result data is success or failure. All records are stored in a data set, and the exception causes are analyzed and classified at the same time so that problems can be analyzed later. As shown in Fig. 6, the recording module includes:
(1) Execution process detail recording subunit
Screenshots are taken during data execution, by default every second; the screenshot interval can be customized as needed, and the screenshots are stored in this subunit's execution process detail record data set. At the same time, the fields and content identified during execution are recorded in this subunit's database. The identified content is information in the page, obtained in two ways: first, identifying nodes from the page structure and extracting and storing them, including character attributes; second, recognizing fields or other character information in the page screenshot with OCR and storing them. A unique index over both parts of the data points to them and is used later to quickly search and match the detail data.
(2) Execution result recording subunit
If no exception occurs while the test data runs, or the result matches the configured success result, the data is considered successfully executed, its result tag is marked success, and the tag is stored in the execution result record data set. If an exception or error occurs during execution, or the result does not match the configured success result, the result tag is marked failure and is likewise stored in the execution result record data set.
(3) Execution failure cause analysis subunit
For failure data, multi-dimensional data such as the program error code, response time, failure pop-ups, and log data are captured and stored in the execution failure cause analysis data set. For data that fails to execute, if program execution is interrupted by an exception, the test data is judged to have failed; the error code and page pop-up information at the time of the exception are captured, and the character attribute content newly added to the page is recognized or extracted by OCR on a page screenshot. The response time is recorded for every piece of test data, including failed data, and the log is the normal log content of each data run, again including failed data. The execution data is input into the existing failure cause analysis model for matching prediction, yielding a probability for each failure cause type, and the type with the highest probability is taken as the failure cause type of the data.
An existing artificial intelligence model, such as a neural network or a convolutional neural network, first has its parameters trained on existing failure data and the corresponding failure cause data; once the model is obtained, it analyzes and predicts on failure data actually produced in testing to classify the failure causes. Of course, the model can continuously adjust and optimize its parameters as data accumulates.
5. Execute all remaining test data in turn in the same loop, then end the program run.
In one embodiment of the present invention, the specific process is as follows:
1. Read the data set and obtain the following data:
{sequence number: 1, merchant type: 'two-dimensional code merchant', certificate type: 'business license', certificate number: '1234'};
{sequence number: 2, merchant type: 'POS merchant', certificate type: 'organization code', certificate number: '4321'};
{sequence number: 3, merchant type: 'small micro merchant', certificate type: 'identification card', certificate number: '1234'}
where:
the data with sequence number 1 corresponds to the result tag {sequence number 1: ['failure', 'success']} in the execution result subunit;
the data with sequence number 2 corresponds to the result tag {sequence number 2: ['failure', 'failure']} in the execution result subunit;
the data with sequence number 3 has no corresponding result tag in the execution result subunit;
2. Begin executing each piece of data in turn:
(1) First, the 1st piece of data {sequence number: 1, merchant type: 'two-dimensional code merchant', certificate type: 'business license', certificate number: '1234'}: its result tag in the execution result subunit is {sequence number 1: ['failure', 'success']}. The tag contains success, meaning this data has already run successfully, so the test record is not executed again and the next piece of data is executed in turn;
(2) Execute the 2nd piece of data {sequence number: 2, merchant type: 'POS merchant', certificate type: 'organization code', certificate number: '4321'}: its result tag in the execution result subunit is {sequence number 2: ['failure', 'failure']}. The tag contains only failures, meaning no successful run has occurred, so the data is configured and executed:
1) During execution, if no program exception occurs and the result matches the configured expected result, the data is considered to have run successfully: its result tag is modified to success and recorded in the execution result recording subunit, the data and screenshots produced during program execution are stored in the execution process detail data subunit, and after execution finishes the next piece of data is executed in turn;
2) If a program exception interrupts the run, or the result does not match the configured expected result, execution of the data is considered to have failed: the detailed run data and screenshots are recorded in the execution process detail data, and the error and exception conditions are recorded in the execution failure cause analysis subunit, which further analyzes the exception code and conditions and classifies the failure cause. The data is then reconfigured and rerun; if it succeeds, the corresponding data is recorded and the next piece of data is executed; if it fails, the corresponding data is recorded again and the run restarts; and if the number of failures equals the preset threshold and the run is still unsuccessful, the data is skipped and the next piece of data is prepared for execution;
(3) Execute the 3rd piece of data {sequence number: 3, merchant type: 'small micro merchant', certificate type: 'identification card', certificate number: '1234'}: this test data has no corresponding result tag in the execution result subunit, which means it has never been executed, so it is configured and executed:
Execution proceeds as in (2), recording the corresponding detail data and result data during the run. If it succeeds, execution finishes; if an exception occurs, the exception code or condition is recorded and stored in the execution failure cause analysis subunit for analysis and classification; and if the number of failures equals the preset threshold and the run is still unsuccessful, this test data is skipped and execution finishes.
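The dispatch decisions for these three records can be reproduced in a few lines; this is only a sketch of the skip/re-run/first-run logic, with the English merchant-type strings following the translations used above.

```python
# The three records of the worked example, with their existing result tags.
dataset = [
    {"id": 1, "merchant_type": "two-dimensional code merchant",
     "certificate_type": "business license", "certificate_number": "1234"},
    {"id": 2, "merchant_type": "POS merchant",
     "certificate_type": "organization code", "certificate_number": "4321"},
    {"id": 3, "merchant_type": "small micro merchant",
     "certificate_type": "identification card", "certificate_number": "1234"},
]
result_tags = {1: ["failure", "success"], 2: ["failure", "failure"]}  # 3: no tag

for case in dataset:
    tags = result_tags.get(case["id"], [])
    if "success" in tags:
        print(case["id"], "skip: already ran successfully")   # record 1
    elif tags:
        print(case["id"], "re-run: only failures so far")     # record 2
    else:
        print(case["id"], "first run: never executed")        # record 3
```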
By recording various information about test data runs, the method and device improve stability when exceptions occur during automated testing and the efficiency of handling them, and make it easy to trace case failure causes afterwards; successfully executed test data is also recorded, so successful execution records can be skipped the next time the program runs, avoiding repeated execution and improving operating efficiency.
Fig. 7 is a schematic structural diagram of an exception handling apparatus for automated testing according to an embodiment of the present invention, where the apparatus includes:
A test data module 10, configured to obtain a test data set, read the test data in the test data set one by one, and obtain a result tag corresponding to each piece of test data.
The test data set can be stored in a database and is obtained from the database when test cases need to be executed. The test data set comprises multiple pieces of test data, each with a corresponding result tag. Specifically, the test data in the test data set are read one by one, and the result tag corresponding to each piece of test data is obtained from the database or another storage device.
And the test execution module 20 is configured to execute the test data and acquire result data of the test data in an execution process if it is known that the result tag corresponding to the test data is failed or empty.
If the result label of the test data is successful, the test data is successfully operated before, and the test data does not need to be repeatedly operated again, so that the time is saved, and the efficiency is improved.
Further, the result tag may be success, failure, or null when the test data is first executed.
Further, if the result tag of the test data is failure, the test data is executed. Specifically, the test data whose result label is failure is configured, and after configuration, the test data is executed. In addition, if the result tag corresponding to the test data is null, it indicates that the test data has not been executed, the test data is configured and executed, and after configuration, the test data is executed.
Further, when the test data is executed, result data in the execution process is collected. In particular, the result data may be success or failure.
And a label updating module 30, configured to update the result label corresponding to the test data to be successful if the result data is successful.
If the result data is successful, it indicates that no exception occurs in the running process of the test data or the result is consistent with the set successful result, the test data is considered to be successfully executed, and the result label corresponding to the test data is updated to be successful.
Further, the updating of the result tag may be to directly modify the result tag to be successful, or the result tag may be a sequence, and the process of updating the result tag is to sequentially add the latest result in the sequence. For example, the result tag is { sequence number 1: [ 'fail', 'fail' ] }, which may be updated as { sequence number 1: [ 'failure', 'success' ] }.
And the repeated execution module 40 is configured to repeatedly execute the test data if the result data is a failure, and acquire execution data corresponding to the test data until the number of repeated execution times reaches a preset value or the result data of the test data is a success.
If the result data is failure, it indicates that an abnormal error report occurs in the test data execution process or the result is inconsistent with the set success result, and the test data execution is considered to be failed.
Further, when the execution of the test data fails, the test data is executed again, and the execution is repeated until the result data of the test data becomes successful, or the number of times of the repeated execution reaches a preset value. Specifically, for example, the preset value may be 5 times, and when the number of repeated executions reaches 5 times, the execution of the piece of test data is stopped.
Further, if the result data is failure, the execution data in the execution process is collected while the test data is executed. Specifically, the execution data includes an error code, a response time, popup information, and log data.
Furthermore, for test data with execution failure, if the program execution is abnormally interrupted, the test data is judged to be data with execution failure, an error reporting code and page popup information when the abnormality occurs can be captured, and character attribute contents of page newly added contents can be identified or extracted according to a page screenshot OCR; the response time is recorded for each piece of test data, including execution failure data; the log is the normal log content of each piece of test data running, including the test data that failed to execute.
In this embodiment, as shown in Fig. 8, the apparatus further includes: a failure cause analysis module 50, configured to input the execution data into a pre-established failure cause analysis model to obtain the failure cause corresponding to the test data.
For data that fails to execute, the execution data collected during execution of the test data is input into the pre-established failure cause analysis model, yielding a probability for each candidate failure cause; the failure cause with the highest probability is taken as the failure cause of the test case.
Based on the same inventive concept as the exception handling method for automated testing, the invention also provides the exception handling apparatus for automated testing described above. Since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated parts are not described again.
By recording various information about test data runs, the method and device improve stability when exceptions occur during automated testing and the efficiency of handling them, and make it easy to trace case failure causes afterwards; successfully executed test data is also recorded, so successful execution records can be skipped the next time the program runs, avoiding repeated execution and improving operating efficiency.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method when executing the program.
The present invention also provides a computer-readable storage medium storing a computer program for executing the above method.
As shown in fig. 9, the electronic device 600 may further include: communication module 110, input unit 120, audio processing unit 130, display 160, power supply 170. It is noted that the electronic device 600 does not necessarily include all of the components shown in FIG. 9; furthermore, the electronic device 600 may also comprise components not shown in fig. 9, which may be referred to in the prior art.
As shown in fig. 9, the central processor 100, sometimes referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, the central processor 100 receiving input and controlling the operation of the various components of the electronic device 600.
The memory 140 may be, for example, one or more of a buffer, a flash memory, a hard drive, removable media, volatile memory, non-volatile memory, or another suitable device. It may store information related to failures, as well as a program for processing that information, and the central processor 100 may execute that program stored in the memory 140 to implement information storage, processing, and the like.
The input unit 120 provides input to the central processor 100; it is, for example, a key or a touch input device. The power supply 170 supplies power to the electronic device 600. The display 160 displays objects such as images and characters; it may be, for example, an LCD display, but is not limited thereto.
The memory 140 may be a solid-state memory, such as a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when power is off, can be selectively erased, and can be loaded with more data; an example of such a memory is sometimes called an EPROM or the like. The memory 140 may also be some other type of device. The memory 140 includes a buffer memory 141 (sometimes referred to as a buffer) and an application/function storage section 142, which stores application programs and function programs, or the flow by which the central processor 100 executes the operation of the electronic device 600.
The memory 140 may also include a data store 143, the data store 143 for storing data, such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. The driver storage portion 144 of the memory 140 may include various drivers of the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., messaging application, address book application, etc.).
The communication module 110 is a transmitter/receiver 110 that transmits and receives signals via an antenna 111. The communication module (transmitter/receiver) 110 is coupled to the central processor 100 to provide an input signal and receive an output signal, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 110 is also coupled to a speaker 131 and a microphone 132 via an audio processor 130 to provide audio output via the speaker 131 and receive audio input from the microphone 132 to implement general telecommunications functions. Audio processor 130 may include any suitable buffers, decoders, amplifiers and so forth. In addition, an audio processor 130 is also coupled to the central processor 100, so that recording on the local can be enabled through a microphone 132, and so that sound stored on the local can be played through a speaker 131.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and implementation of the invention have been explained herein using specific embodiments; the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (10)

1. An exception handling method for automated testing, the method comprising:
obtaining a test data set, reading the test data in the test data set one by one, and obtaining a result tag corresponding to each piece of test data;
if the result tag corresponding to the test data is failure or empty, executing the test data, and collecting result data of the test data during execution;
if the result data is success, updating the result tag corresponding to the test data to success;
and if the result data is failure, repeatedly executing the test data and collecting execution data corresponding to the test data until the number of repeated executions reaches a preset value or the result data of the test data is success.
2. The method of claim 1, wherein executing the test data and collecting result data of the test data during execution comprises:
executing the test data, and collecting detail data and result data of the test data during execution; the detail data comprises screenshot data and page information.
3. The method of claim 2, wherein collecting the detail data of the test data during execution comprises:
taking screenshots of the test data execution page at a preset time interval to obtain screenshot data, and performing character recognition on the screenshot data to obtain character information;
and performing node identification on the test data execution page at a preset time interval to obtain character attribute content, and taking the character information and the character attribute content as the page information.
4. The method of claim 1, further comprising: inputting the execution data into a pre-established failure cause analysis model to obtain the failure cause corresponding to the test data.
5. The method of claim 4, wherein the execution data comprises an error code, a response time, pop-up information, and log data.
6. The method of claim 4, wherein the failure cause analysis model is established by:
acquiring historical failure data and the failure causes and execution data corresponding to the historical failure data, and using them as training samples;
and inputting the training samples into an initial artificial intelligence model for model training to obtain the failure cause analysis model.
7. An exception handling apparatus for automated testing, the apparatus comprising:
a test data module, configured to obtain a test data set, read the test data in the test data set one by one, and obtain a result tag corresponding to each piece of test data;
a test execution module, configured to execute the test data and collect result data of the test data during execution if the result tag corresponding to the test data is failure or empty;
a tag updating module, configured to update the result tag corresponding to the test data to success if the result data is success;
and a repeated execution module, configured to repeatedly execute the test data if the result data is failure, and collect execution data corresponding to the test data until the number of repeated executions reaches a preset value or the result data of the test data is success.
8. The apparatus of claim 7, further comprising: a failure cause analysis module, configured to input the execution data into a pre-established failure cause analysis model to obtain the failure cause corresponding to the test data.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 6.
CN202110685533.8A 2021-06-21 2021-06-21 Exception handling method and device for automatic test Pending CN113392009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110685533.8A CN113392009A (en) 2021-06-21 2021-06-21 Exception handling method and device for automatic test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110685533.8A CN113392009A (en) 2021-06-21 2021-06-21 Exception handling method and device for automatic test

Publications (1)

Publication Number Publication Date
CN113392009A true CN113392009A (en) 2021-09-14

Family

ID=77623183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110685533.8A Pending CN113392009A (en) 2021-06-21 2021-06-21 Exception handling method and device for automatic test

Country Status (1)

Country Link
CN (1) CN113392009A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9514037B1 (en) * 2015-12-16 2016-12-06 International Business Machines Corporation Test program scheduling based on analysis of test data sets
CN109871315A (en) * 2019-01-03 2019-06-11 平安科技(深圳)有限公司 The diagnostic method and device of system upgrade failure based on machine learning
CN110347590A (en) * 2019-06-18 2019-10-18 平安普惠企业管理有限公司 The interface testing control method and device of operation system
CN112540916A (en) * 2020-11-30 2021-03-23 的卢技术有限公司 Automatic rerun method and device for failed case, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116340191A (en) * 2023-05-31 2023-06-27 合肥康芯威存储技术有限公司 Method, device, equipment and medium for testing memory firmware
CN116340191B (en) * 2023-05-31 2023-08-08 合肥康芯威存储技术有限公司 Method, device, equipment and medium for testing memory firmware


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination