CN117056221A - Automatic testing method and device for application program, computer equipment and storage medium

Info

Publication number
CN117056221A
Authority
CN
China
Prior art keywords
test
library
parameters
application program
script
Prior art date
Legal status
Pending
Application number
CN202311031410.8A
Other languages
Chinese (zh)
Inventor
张业
张松源
魏桂芳
Current Assignee
Dongguan Yilian Information System Co ltd
Original Assignee
Dongguan Yilian Information System Co ltd
Priority date
Filing date
Publication date
Application filed by Dongguan Yilian Information System Co ltd
Priority to CN202311031410.8A
Publication of CN117056221A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3664 Environments for testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3684 Test management for test design, e.g. generating new test cases
    • G06F11/3692 Test management for test results analysis
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application discloses an automatic test method for an application program, which comprises the steps of: obtaining test expected information of a target application program, and configuring each parameter in the test expected information in a preset test command parameter library; obtaining test environment parameters, and configuring the test environment parameters in a preset test environment parameter library; establishing an operation library, generating test scripts corresponding to the test expected information and the test environment parameters, and configuring the test scripts in the operation library; establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library; matching and connecting the automatic test model with the target application program; and acquiring the running state parameters of the target application program, configuring the running state parameters in the operation library, generating a corresponding running script, and comparing and verifying the running script against the test script to generate evaluation result information corresponding to the comparison and verification result. By implementing the method provided by the embodiment of the application, the efficiency of testing application start-up time can be improved, and fast, extensible testing of application start-up time can be realized.

Description

Automatic testing method and device for application program, computer equipment and storage medium
Technical Field
The present application relates to the field of intelligent device testing technologies, and in particular, to an automatic testing method and apparatus for an application program, a computer device, and a storage medium.
Background
UI automation testing is a method of verifying application functionality and performance by simulating user interaction behavior. Currently, many UI automation test frameworks are available, such as Appium, Espresso and UI Automator. These frameworks enable cross-platform testing and support automated testing on different devices and operating systems. UI automation testing tools provide rich APIs and functions for UI element positioning, simulated operation, verification assertions and the like, so as to achieve comprehensive automated test coverage. Automated build tools are tools for managing the application build process, such as Gradle and Maven. These tools can specify dependencies, compilation parameters and the like through configuration files, thereby enabling automatic download, compilation and deployment of applications. Automated build tools can be integrated into a CI/CD (continuous integration/continuous delivery) pipeline, enabling the overall process of automated build, test and deployment. In automated testing, script writing is one of the key links. Currently, a variety of scripting techniques and tools are available. The Python language is often used for writing automated test scripts, and the uiautomator2 library provides communication and control functions for Android devices, so that UI automation testing can be conveniently realized. Other programming languages and tools are also available, such as Java, Kotlin and Robot Framework. However, the existing application start-up time test methods have the defects of high labor cost, high time cost, low test efficiency and poor expandability. By adopting technical schemes such as automated testing, parallel testing and batch processing, test efficiency can be improved, cost can be reduced, and fast, extensible application start-up time testing can be realized.
Disclosure of Invention
The embodiment of the application provides an automatic test method, an automatic test device, computer equipment and a storage medium for an application program, wherein the method adopts the technical means of automatic test, parallel test, batch processing and the like so as to improve the test efficiency, reduce the cost and realize quick and extensible application starting time test.
In a first aspect, an embodiment of the present application provides an application automation test method, including: acquiring test expected information of the target application program, and configuring each parameter in the test expected information in a preset test command parameter library, wherein the test expected information comprises an expected cold start duration and an expected hot start duration; acquiring test environment parameters, and configuring the test environment parameters in a preset test environment parameter library; establishing an operation library based on the test command parameter library and the test environment parameter library, generating test scripts corresponding to the test expected information and the test environment parameters, and configuring the test scripts in the operation library; establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library; matching and connecting the automatic test model with the target application program; acquiring running state parameters of the target application program, configuring the running state parameters in the operation library and generating corresponding running scripts, wherein the running state parameters comprise an actual cold start duration and an actual hot start duration; and comparing and verifying the running script with the test script to generate evaluation result information corresponding to a comparison and verification result.
In a second aspect, an embodiment of the present application further provides an application automation test device, including:
in a third aspect, an embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the method when executing the computer program.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, implement the above-described method.
The embodiment of the application provides an application program automatic testing method, an application program automatic testing device, computer equipment and a storage medium. Wherein the method comprises the following steps: acquiring test expected information of a target application program, and configuring various parameters in the test expected information in a preset test command parameter library, wherein the test expected information comprises an expected cold start time length and an expected hot start time length; acquiring test environment parameters, and configuring the test environment parameters in a preset test environment parameter library; establishing an operation library based on the test command parameter library and the test environment parameter library, and generating test script configuration corresponding to the test expected information and the test environment parameters in the operation library; establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library; matching and connecting the automatic test model with a target application program; acquiring running state parameters of a target application program, configuring the running state parameters in a running library and generating a corresponding running script, wherein the running state parameters comprise the actual cold start time length and the actual hot start time length; and comparing and verifying the running script and the test script to generate evaluation result information corresponding to the comparison and verification result. According to the embodiment of the application, the automatic testing tool is used, a batch processing script or tool is developed by utilizing the parallel testing capability, and the distributed testing framework is adopted, so that the effects of improving the testing efficiency, reducing the cost and realizing the quick and extensible application starting time test can be realized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of an automatic testing method for an application program according to an embodiment of the present application;
FIG. 2 is a flowchart of an automatic test method for application programs according to an embodiment of the present application;
FIG. 3 is a schematic sub-flowchart of an automated testing method for an application program according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another sub-flowchart of an automated testing method for applications according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another sub-flowchart of an automated testing method for applications according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another sub-flowchart of an automatic testing method for application programs according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another sub-flowchart of an automated testing method for applications according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another sub-flowchart of an automated testing method for applications according to an embodiment of the present application;
FIG. 9 is a schematic block diagram of an automated application testing apparatus provided by an embodiment of the present application;
fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
The embodiment of the application provides an automatic test method and device for an application program, computer equipment and a storage medium.
The execution subject of the automatic test method of the application program can be the automatic test device of the application program provided by the embodiment of the application, or computer equipment integrated with the automatic test device of the application program. The automatic test device of the application program can be realized in a hardware or software mode, the computer equipment can be a terminal or a server, and the terminal can be a smart phone, a tablet computer, a handheld computer, a notebook computer or the like.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of an automatic testing method for an application program according to an embodiment of the present application. The application automated test method is applied to the computer device 500 in fig. 10.
Fig. 2 is a flowchart of an automatic testing method for an application program according to an embodiment of the present application. As shown in fig. 2, the method includes the following steps S110-S170.
S110, acquiring test expected information of a target application program, and configuring each parameter in the test expected information in a preset test command parameter library, wherein the test expected information comprises an expected cold start duration and an expected hot start duration.
Specifically, first, test expected information is obtained from specification documents, demand specifications, or other related data of a target application program. Such information may include parameters such as expected cold start duration and hot start duration. And configuring each parameter in the acquired test expected information in a preset test command parameter library. Corresponding fields or variables may be set in the test command parameter library to store these parameters according to specific requirements. For example, the following fields may be set in the test command parameter library: cold_start_duration: expected cold start duration, hot_start_duration: expected hot start duration. The parameters configured in the test command parameter library may be used in subsequent test script generation and execution. And acquiring the expected cold start duration and hot start duration by reading field values in the test command parameter library in the test script, and performing corresponding test operation. These parameters may be passed to an application's launch command or test tool to monitor whether the actual launch duration meets expectations. By configuring the acquired test expectation information in the test command parameter library, the parameters can be conveniently used in an automatic test flow, and the test and evaluation of the starting time length can be performed.
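As a minimal sketch of this step, assuming a JSON file as the on-disk form of the test command parameter library, the cold_start_duration and hot_start_duration fields named above could be populated as follows; the file name, helper name and package name are illustrative assumptions rather than part of the described method.

```python
import json
from pathlib import Path

# Hypothetical on-disk representation of the test command parameter library.
COMMAND_PARAM_LIB = Path("test_command_params.json")

def configure_expected_durations(app_id: str, cold_start_s: float, hot_start_s: float) -> None:
    """Store the expected cold/hot start durations (in seconds) for one target application."""
    library = json.loads(COMMAND_PARAM_LIB.read_text()) if COMMAND_PARAM_LIB.exists() else {}
    library[app_id] = {
        "cold_start_duration": cold_start_s,  # expected cold start duration
        "hot_start_duration": hot_start_s,    # expected hot start duration
    }
    COMMAND_PARAM_LIB.write_text(json.dumps(library, indent=2))

# Example: the target app is expected to cold-start within 2 s and hot-start within 0.8 s.
configure_expected_durations("com.example.app", cold_start_s=2.0, hot_start_s=0.8)
```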
S120, acquiring test environment parameters, and configuring the test environment parameters in a preset test environment parameter library.
Specifically, the required test environment parameters are determined: first, the test environment parameters that need to be used in the test procedure are determined. These parameters may include device information, network conditions, system version, external resource dependencies, etc. Obtaining test environment parameters: and acquiring corresponding test environment parameters according to the test requirements and the characteristics of the target application program. For example, some common test environment parameters are obtained by: and acquiring information such as the name, the model, the version of an operating system and the like of the equipment by using an ADB tool. And acquiring parameters such as network type, bandwidth, delay and the like according to the current network connection state. And acquiring the version information of the currently running system through a system command or API. External resources such as databases, interface services and the like on which the target application program depends are determined, and corresponding parameters such as addresses, ports, user names, passwords and the like are acquired. And configuring the obtained test environment parameters into a preset test environment parameter library. Corresponding fields or variables may be set in the test environment parameter library to store these parameters according to specific requirements. For example, the following fields may be set in the test environment parameter library: device_name: device name, os_version: operating system version, network_type: network type, db_host: database host address, db_port: database port number, db_username: database user name, db_password: database password, parameters configured in the test environment parameter library may be used in the subsequent test script generation and execution process. The device information, the network condition, the system version and the like can be obtained by reading field values in a test environment parameter library in the test script, and corresponding test operation is carried out according to the parameters. The acquired test environment parameters are configured into a test environment parameter library, so that the parameters can be conveniently used in an automatic test flow, and a correct environment is provided for a test process. This helps to ensure the reliability of the test results and provides accurate test environment information for the developer to conduct troubleshooting and optimization efforts.
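A hedged sketch of collecting some of the environment parameters listed above from an Android device with the ADB tool and writing them into the test environment parameter library; the getprop property keys are standard Android system properties, while the file name and the hard-coded network type are assumptions.

```python
import json
import subprocess
from pathlib import Path

ENV_PARAM_LIB = Path("test_environment_params.json")

def adb_getprop(prop: str) -> str:
    """Read one Android system property from the connected device via adb."""
    out = subprocess.run(["adb", "shell", "getprop", prop],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def configure_test_environment() -> dict:
    env = {
        "device_name": adb_getprop("ro.product.model"),         # device model name
        "os_version": adb_getprop("ro.build.version.release"),  # Android OS version
        "network_type": "wifi",                                  # assumed value; could be read from the device instead
        # external resource parameters such as db_host/db_port would be filled in from the test configuration
    }
    ENV_PARAM_LIB.write_text(json.dumps(env, indent=2))
    return env

if __name__ == "__main__":
    print(configure_test_environment())
```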
S130, establishing a running base based on the test command parameter base and the test environment parameter base, and generating test script configuration corresponding to the test expected information and the test environment parameters in the running base.
Specifically, an operation library based on the test command parameter library and the test environment parameter library is established, and test scripts corresponding to the test expected information and the test environment parameters are generated and configured in the operation library. First, an operation library suitable for storing and managing the test command parameters and the test environment parameters is created; files, databases, or other data storage means may be used. The parameters in the test command parameter library are then added into the operation library: according to the test script and the test expected information, the required test command parameters are written into the corresponding fields or variables in the operation library, ensuring that all necessary parameters are correctly configured. The parameters in the test environment parameter library are likewise added into the operation library: according to the test script and the test environment requirements, the required test environment parameters are written into the corresponding fields or variables, ensuring that the environment parameters required by the test are correctly configured. Next, the test script configuration corresponding to the test expected information and the test environment parameters is generated from the test command parameters and test environment parameters in the operation library. Depending on the specific requirements and the test framework used, a programming language (e.g., Python) may be used to generate the configuration file, or a template engine (e.g., Jinja2) may be used to generate a dynamic test script configuration file. Finally, the generated test script configuration is added into the operation library and stored together with the test command parameters and the test environment parameters.
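A sketch of the script-generation step, under the assumption that the two parameter libraries from the earlier sketches exist as JSON files; it uses the Jinja2 template engine mentioned above, and the template contents, file names and entry layout are illustrative.

```python
import json
from pathlib import Path
from jinja2 import Template  # template engine mentioned in this step

# Illustrative template: a tiny pytest-style check of the measured cold start time.
SCRIPT_TEMPLATE = Template(
    'EXPECTED_COLD_START = {{ cold_start_duration }}\n'
    'DEVICE_NAME = "{{ device_name }}"\n'
    '\n'
    'def test_cold_start(measured_cold_start):\n'
    '    assert measured_cold_start <= EXPECTED_COLD_START\n'
)

def build_runtime_entry(app_id: str) -> dict:
    """Pair command and environment parameters, render the test script, and store everything together."""
    cmd_params = json.loads(Path("test_command_params.json").read_text())[app_id]
    env_params = json.loads(Path("test_environment_params.json").read_text())
    script = SCRIPT_TEMPLATE.render({**cmd_params, **env_params})
    entry = {"app_id": app_id, "command_params": cmd_params,
             "environment_params": env_params, "test_script": script}
    Path(f"runtime_{app_id}.json").write_text(json.dumps(entry, indent=2))
    return entry
```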
And S140, establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library.
Specifically, an automatic test model is established according to a test command parameter library, an environment information parameter library and an operation library, and a group of test case sets are defined according to test requirements and targets. Each test case should include test command parameters, expected results, and related environmental information parameters. In the automated test model, corresponding test command parameters and environment information parameters are obtained from the runtime. The data in the runtime library may be read by a programming language (e.g., python) and stored in a suitable data structure, such as a list or dictionary. According to each test case, a test script generator is created, and the generator is responsible for generating test scripts corresponding to the test command parameters and the environment information parameters. The test script may be generated using a template engine or other means to insert the test command parameters and the environmental information parameters into the corresponding locations. And executing the automatic test by calling the test script generated by the test script generator. An automated test framework (e.g., selenium, appium, etc.) or other suitable test tool may be used to run the test scripts. After the test is completed, checking whether the actual result is consistent with the expected result. And recording the passing or failing of the test according to the test result, and storing the test result information. And executing each test case in a circulating way according to the defined test case set. In each cycle, the test command parameters and the environment information parameters are updated, and a test script generator is used for generating a corresponding test script to perform automatic test. And recording an execution result of each test case. Based on all test results, result analysis is performed to generate test reports and provide detailed information about test execution, such as the number of test cases passed, the number of test cases failed, error logs, etc.
And S150, carrying out matching connection on the automatic test model and the target application program.
Specifically, to match the automated test model to the target application, the following steps may be performed: identify the target application explicitly, such as a web application, mobile application, or desktop application. According to the type of the target application program and the test requirement, a proper automated test tool is selected. For example, for a web application, Selenium or Cypress may be selected; for mobile applications, Appium or UI Automator may be selected; for desktop applications, SikuliX or WinAppDriver may be selected. These tools all provide rich APIs and functionality to support automated testing. The selected test tool is integrated into the automated test model; the tool's official documentation or example code may be used to learn how to configure and use it. A test script is written, and the target application program is operated by using the APIs and functions provided by the test tool, according to the test requirements and the functional characteristics of the target application program. In the test script, the test command parameter library and the environment information parameter library may be used to obtain the corresponding test parameters. The test script is executed using the selected test tool. The test tool will launch the target application and perform an automated test according to the operational steps defined in the test script. During test execution, the behavior and results of the target application may be monitored and verified, ensuring that the operations and interactions in the test script are consistent with expectations according to the expected results and assertion mechanisms.
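Since the background section mentions the uiautomator2 library, one possible way to match and connect the test model with an Android target application is sketched below; the package name is a placeholder and the foreground check is illustrative.

```python
from typing import Optional

import uiautomator2 as u2  # Android automation library already mentioned in the background section

TARGET_PACKAGE = "com.example.app"  # placeholder package name of the target application

def connect_to_target(serial: Optional[str] = None):
    """Connect the automated test model to the target application running on an Android device."""
    device = u2.connect(serial) if serial else u2.connect()  # attach via adb to the given or default device
    device.app_start(TARGET_PACKAGE)                         # launch the target application
    current = device.app_current()                           # dict describing the foreground app
    assert current["package"] == TARGET_PACKAGE, "target application did not reach the foreground"
    return device
```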
S160, acquiring running state parameters of the target application program, configuring the running state parameters in a running library, and generating corresponding running scripts, wherein the running state parameters comprise an actual cold start duration and an actual hot start duration.
Specifically, to obtain the running state parameters of the target application program, configure the running state parameters in the running library, generate a corresponding running script, and collect the actual cold start duration and hot start duration data by using a proper tool or method when the target application program is run. The data may be obtained by means of code timing, logging, or third party performance monitoring tools, etc. And adding the collected cold start duration and hot start duration data into a running library. Databases, text files, or other suitable data storage may be used to store such data. Each application is assured of a corresponding unique identifier for subsequent matching and use. Based on the data in the runtime, a runtime script generator is created that is responsible for generating a runtime script that corresponds to the runtime state parameters of the target application. The run script may be generated using a template engine or other means to insert the cold start duration and hot start duration parameters into the corresponding locations. And executing the running of the target application program by calling the running script generated by the running script generator. The running script simulates the actual start-up procedure using the configured cold start-up duration and hot start-up duration parameters. During operation, the start-up duration of the target application may be monitored and verified. And comparing the actual result with the expected result to ensure that the simulated cold start duration and the simulated hot start duration are consistent with the actual result.
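One concrete way to obtain the actual cold and hot start durations on Android is the `adb shell am start -W` command, whose TotalTime field reports the launch time in milliseconds; the following sketch assumes that approach, with placeholder package and activity names.

```python
import re
import subprocess
import time

def measure_start_ms(package: str, activity: str, cold: bool) -> int:
    """Launch the app with `am start -W` and return the reported TotalTime in milliseconds."""
    if cold:
        # force-stop the process first so the next launch is a true cold start
        subprocess.run(["adb", "shell", "am", "force-stop", package], check=True)
    else:
        # send the app to the background so the next launch is a hot start
        subprocess.run(["adb", "shell", "input", "keyevent", "KEYCODE_HOME"], check=True)
    time.sleep(1)
    out = subprocess.run(
        ["adb", "shell", "am", "start", "-W", "-n", f"{package}/{activity}"],
        capture_output=True, text=True, check=True).stdout
    match = re.search(r"TotalTime:\s*(\d+)", out)  # some Android versions report near-zero values on hot launches
    if not match:
        raise RuntimeError(f"could not parse launch time from:\n{out}")
    return int(match.group(1))

cold_ms = measure_start_ms("com.example.app", ".MainActivity", cold=True)
hot_ms = measure_start_ms("com.example.app", ".MainActivity", cold=False)
print({"actual_cold_start_ms": cold_ms, "actual_hot_start_ms": hot_ms})
```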
S170, comparing and verifying the running script and the test script to generate evaluation result information corresponding to the comparison and verification result.
Specifically, to perform contrast verification on the running script and the test script, generate evaluation result information corresponding to a contrast verification result, execute the test script by using a test tool, and perform an automated test on the target application program. The test script contains expected operation steps and assertion mechanisms for verifying the behavior and results of the target application. The running script is used to simulate the running process of the target application. And simulating the starting process of the application program according to the cold starting time length and the hot starting time length parameters defined in the operation script. During the running of the test script and the running of the script, the behavior and result data of the application program are monitored and recorded. And judging whether the execution of the test script and the running script meets the expectations or not according to the expected result and the assertion mechanism. And comparing and verifying the execution result of the test script with the execution result of the running script. The difference between the two is compared to check if there is any inconsistency or error. And generating corresponding evaluation result information according to the comparison verification result. Including the execution status of the test script and the running script, execution time, number of test cases passed/failed, error information, etc. This information may be recorded in a test report for later analysis and improvement.
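A minimal sketch of the comparison-and-verification step: measured durations recorded by the running script are checked against the expected values from the test script, and a small evaluation record is produced; all field names and the tolerance parameter are illustrative.

```python
def evaluate(expected: dict, actual: dict, tolerance: float = 0.0) -> dict:
    """Compare actual start-up durations against expected ones and build an evaluation record."""
    checks = {}
    for key in ("cold_start_duration", "hot_start_duration"):
        checks[key] = {
            "expected": expected[key],
            "actual": actual[key],
            "passed": actual[key] <= expected[key] + tolerance,
        }
    return {
        "passed": all(c["passed"] for c in checks.values()),
        "details": checks,
    }

# Example: expected values from the test command parameter library vs. measured values.
result = evaluate({"cold_start_duration": 2.0, "hot_start_duration": 0.8},
                  {"cold_start_duration": 1.7, "hot_start_duration": 0.9})
print(result)  # overall fail because the hot start exceeded its expected duration
```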
In summary, test expected information, including parameters such as expected cold start duration and hot start duration, is obtained from specification documents or other relevant data of the target application. And configuring each parameter in the test expected information in a preset test command parameter library. These parameters may be used in subsequent test script generation and execution processes. Knowing the hardware equipment, operating system environment and other related tools required for testing, and configuring the parameters in a preset testing environment parameter library. And establishing an operation library based on the test command parameter library and the test environment parameter library. And then generating test scripts corresponding to the test expected information and the test environment parameters according to the test expected information and the test environment parameters, and configuring the scripts in the operation library. And establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library. The model can be used for managing and executing an automatic test flow, and ensuring the consistency and reproducibility of the test. And according to the configuration in the automatic test model, the automatic test model is matched and connected with the target application program, so that communication and interaction with the target application program are ensured. And acquiring real-time running state parameters including actual cold start duration, hot start duration and the like through communication with a target application program. And configuring the acquired running state parameters in a running library to generate a running script corresponding to the running state parameters. And then comparing and verifying the running scripts with the test scripts generated before so as to ensure the correctness and consistency of the test. And generating evaluation result information corresponding to the comparison verification result according to the comparison verification result. This information can be used to evaluate whether the performance and functionality of the target application is satisfactory and to assist in further improvements and optimizations. The technical scheme provides an automatic test flow, which can improve the test efficiency and accuracy, reduce the workload of manual test, provide reliable evaluation result information and help a developer to better know and improve the performance and the function of a target application program. Meanwhile, the scheme also relates to the implementation of the computer equipment and the storage medium so as to support the execution of the automatic test. By creating a test script (using pytest/unittest), cold and hot start tests are performed on the installed target application in conjunction with the adb command. The automatic test flow can reduce the workload of manual testing and improve the efficiency of testing. In the scheme, a preset test command parameter library and a test environment parameter library are used for configuring and acquiring various parameters required by the test. By configuring the test expected information and the test environment parameters in the library, the test parameters can be flexibly managed and adjusted to meet the test requirements in different scenes. And acquiring the starting time consumption of the target application program by executing cold start and hot start tests. 
The time consumption of starting is an important index for measuring the application experience, and shorter time consumption of starting means faster starting speed of an application program and better user experience. By comparing the actual start-up time consumption with the expected start-up time consumption, evaluation result information can be generated. This information can be used to evaluate whether the performance and functionality of the target application is expected and to help the developer locate and resolve the start-up performance issue. The application starting automatic test is realized, and the test efficiency can be improved. By automating the test flow and starting the time-consuming evaluation, the developer can better understand the performance of the target application and optimize it if necessary, thereby providing a better user experience.
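The summary above mentions creating a test script with pytest/unittest that drives cold and hot start tests through the adb command; a minimal pytest-style sketch under those assumptions could look as follows, where startup_measure is a hypothetical module holding the measurement helper sketched for step S160.

```python
import pytest

from startup_measure import measure_start_ms  # hypothetical module containing the S160 measurement helper

PACKAGE, ACTIVITY = "com.example.app", ".MainActivity"  # placeholder identifiers
EXPECTED_MS = {"cold": 2000, "hot": 800}                # expected durations taken from the parameter library

@pytest.mark.parametrize("mode", ["cold", "hot"])
def test_startup_time(mode):
    actual_ms = measure_start_ms(PACKAGE, ACTIVITY, cold=(mode == "cold"))
    assert actual_ms <= EXPECTED_MS[mode], (
        f"{mode} start took {actual_ms} ms, expected at most {EXPECTED_MS[mode]} ms")
```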
Further, as shown in fig. 3, step S110 further includes performing steps S111-S114:
S111, generating a plurality of test cases based on the expected cold start duration given in the test expected information.
Specifically, generating a plurality of test cases according to the expected cold start duration given in the test expected information may be performed according to the following steps. Determine the range and number of test cases to generate according to the expected cold start duration given in the test expected information. For example, if the expected cold start time is a range (e.g., 1-2 seconds), multiple test cases may be generated to cover different cases within the range. Design specific test cases according to the test case range. The test cases should include key elements such as input data, operational steps, and expected results. The following aspects can be considered: when generating the test cases, different cold start durations can be set as input data, and the expected application behavior or output results are determined based on the expected results. Boundary value cases: when designing test cases, consider boundary values. For example, at the boundaries of the expected cold start duration range, select some special values for testing and observe the response of the application. The test cases are executed using automated test tools or manually. The target application program is run according to the cold start duration and the expected result defined in each test case, and the actual start duration and actual result are observed. The actual cold start duration and actual result of each test case are recorded, and the execution status (pass/fail) of the test case is determined by comparison with the expected result. In this way, multiple test cases may be generated based on the expected cold start duration in the test expected information, verifying that the actual cold start durations and results are consistent with expectations.
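A small sketch of deriving several test cases from an expected cold start range, with the boundary values given high priority as suggested above; the 1-2 second range is the example used in this step, and the field names are illustrative.

```python
def generate_cold_start_cases(min_s: float, max_s: float, steps: int = 3) -> list:
    """Spread `steps` expected cold start durations across [min_s, max_s], boundaries included."""
    step = (max_s - min_s) / (steps - 1) if steps > 1 else 0.0
    return [
        {"name": f"cold_start_case_{i + 1}",
         "expected_cold_start_s": round(min_s + i * step, 3),
         "priority": "high" if i in (0, steps - 1) else "medium"}  # boundary values get high priority
        for i in range(steps)
    ]

print(generate_cold_start_cases(1.0, 2.0))  # cases at 1.0 s, 1.5 s and 2.0 s
```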
S112, arranging the test cases according to the priority order.
Specifically, sequential arrangement by test case priority facilitates first executing the most critical and valuable test cases under limited time and resources. The following is a general procedure for ordering multiple test cases by priority: the criteria for test case priority are determined. This may be based on factors such as the needs of the application, the importance of the function, the criticality of the business process, etc. Test cases may be classified into high, medium, low, etc. priorities. And evaluating the importance and the value of each test case according to the target and the expected result of the test case. Consider the following factors: test cases covering core functions or key function points are preferentially selected. Concern is given about test cases that may be present with serious errors or potential risks. According to the business requirement of the application program, the test cases closely related to the business process are preferentially selected. Each test case is assigned a priority label. For example, a high, medium, low, or number (e.g., 1, 2, 3) may be used to represent priority. And sequencing the test cases according to the priority labels. The test case with the highest priority is placed at the forefront of the list, next to the medium priority test case, and finally to the low priority test case. The prioritization of test cases is reviewed and validated with team members. Ensuring that all agree on the priority order of the test cases and are ready to perform the test.
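A minimal sketch of the priority ordering described above, assuming the high/medium/low labels used in this step; the example case names are invented.

```python
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

def order_by_priority(cases: list) -> list:
    """Return test cases sorted so that higher-priority cases are executed first."""
    return sorted(cases, key=lambda case: PRIORITY_RANK.get(case.get("priority", "low"), 2))

cases = [{"name": "tc_settings", "priority": "low"},
         {"name": "tc_core_launch", "priority": "high"},
         {"name": "tc_payment_flow", "priority": "medium"}]
print([c["name"] for c in order_by_priority(cases)])  # ['tc_core_launch', 'tc_payment_flow', 'tc_settings']
```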
S113, respectively generating corresponding test command parameter libraries according to the sequencing results of the test cases.
Specifically, the corresponding test command parameter library generated according to the sequencing result of the test cases can more effectively execute the test, and ensure that important test cases are covered according to the priority order. First, the structure and fields of the test command parameter library are determined. Depending on the test tool used and the requirements. For example, parameter fields such as input data, operation steps, expected results, and the like may be defined for each test case. And according to the sequencing result of the test cases, creating the entries of the test command parameter library one by one. For each test case, the relevant test parameters are recorded in a parameter library. The test cases are added according to the priority order, so that the test can be performed according to the priority order when the test is executed later. Corresponding parameter values are added for each field, including input data, operation steps, etc. These parameter values should be consistent with what is defined in the test case. If the test environment needs to be pre-processed or configured, relevant fields and parameter values may be added to the parameter library to ensure that the necessary preparation is performed before the test is performed. And (5) sorting and verifying the generated test command parameter library. Each test case is ensured to have corresponding parameter records and arranged according to the correct priority order.
S114, configuring the expected cold start duration corresponding to each test case in the test command parameter library corresponding to that test case.
Specifically, the expected cold start time length corresponding to each test case is configured in the test command parameter library corresponding to the test case, so that performance indexes in the test process can be more comprehensively recorded and managed. In the test command parameter library, a field for the expected cold start duration is added for each test case. A field or fields may be used depending on the requirements. For each test case, corresponding values are filled in according to the expected cold start duration. Units of seconds, milliseconds, etc. may be used according to actual requirements. In addition to the expected cold start duration field, other fields in the parameter library (e.g., input data, operational steps, etc.) should be ensured to be consistent with the definition and requirements of the test case. And (3) sorting and verifying the generated test command parameter library to ensure that each test case contains the value of the expected cold start duration and is arranged according to the correct priority order. When the test is performed, the expected cold start duration field in the parameter library can be used as a reference to be compared with the actual performance result. This allows a better assessment of the performance of the system and a timely identification of any unexpected situations.
Specifically, as shown in fig. 4, step S120 further includes performing steps S121 to S125:
S121, acquiring and classifying the test environment information to obtain a plurality of test environment parameters expected based on the test environment.
Specifically, to obtain and classify the test environment information and obtain the test environment parameters expected based on the test environment, various information related to the test environment needs to be obtained, including hardware configuration, operating system version, network connection speed, database version, and the like. Such information may be obtained by way of system commands, configuration files, runtime parameters, and the like. And classifying the collected test environment information according to the requirements and the test targets. For example, hardware configuration information, operating system related information, network environment information, etc. may be classified into different categories. Corresponding test environment parameters are determined based on the expected requirements of each test environment class. For example, if a test case requires execution under a particular operating system version, the operating system version may be considered one of the test environment parameters. And sorting and recording the determined test environment parameters in a test command parameter library or configuration file. Ensuring that each test environment parameter is consistent with the corresponding test environment expectations. The recorded test environment parameters are used to configure and adjust the test environment while the test is being performed. Ensuring that the test environment is expected to properly execute the test cases.
S122, generating a plurality of corresponding environment assessment cases based on a plurality of combination relations of the plurality of test environment parameters.
Specifically, a plurality of corresponding environment assessment cases are generated based on a plurality of combination relations of the plurality of test environment parameters. Determine the test environment parameters participating in the combination relation according to the requirements and the test targets, such as operating system version, database version, network connection speed, etc. Enumerate the range of possible values for each test environment parameter. For example, the operating system version may include Windows, Linux, macOS, etc., and the database version may include MySQL, Oracle, PostgreSQL, etc. Combine the value ranges of the test environment parameters using a combination algorithm or script to generate the various possible combinations, ensuring that each parameter is combined with the others and that a corresponding environment assessment case is generated for each combination. Add the expected evaluation result for each generated environment assessment case; this may be a desired performance index, functional compatibility, etc. Verify and screen the generated environment assessment cases to ensure their rationality and effectiveness; removing duplicates and excluding infeasible combinations may be considered. Finally, the verified and screened environment assessment cases are recorded in the test case library, including the value combination of the test environment parameters and the expected evaluation result.
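The combination step can be sketched with itertools.product; the parameter values below are the examples enumerated in this paragraph, and duplicate or infeasible combinations could be filtered afterwards as described.

```python
from itertools import product

env_parameters = {
    "os": ["Windows", "Linux", "macOS"],
    "database": ["MySQL", "Oracle", "PostgreSQL"],
    "network": ["wifi", "4g"],
}

def generate_environment_cases(params: dict) -> list:
    """Build one evaluation case per combination of environment parameter values."""
    names = list(params)
    cases = []
    for i, values in enumerate(product(*(params[n] for n in names)), start=1):
        cases.append({"seq": i, **dict(zip(names, values)), "expected_result": None})
    return cases

cases = generate_environment_cases(env_parameters)
print(len(cases), cases[0])  # 18 combinations; the first pairs Windows + MySQL + wifi
```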
S123, arranging the plurality of environment evaluation cases according to the priority order and obtaining corresponding serial numbers.
Specifically, to sequentially arrange the plurality of environment assessment cases according to the priority and obtain the corresponding serial numbers, a priority is determined for each environment assessment case, and the priority may be a high, medium, low or the like level, or a number is used for representing, for example, 1, 2, 3 or the like. Each environmental assessment use case is assigned an initial sequence number, which may be assigned in the order they were in the original list, e.g., use case 1 numbered 1, use case 2 numbered 2, and so on. The use cases are reordered according to the priority of evaluating the use cases. The use cases with high priority are ranked in front, followed by the use cases with medium and low priorities. Ensuring that the order of the initial sequence numbers is still maintained at the same priority level. And updating the sequence number of each use case according to the reordered use case sequence. In a new order, use case 1 may be numbered 3, use case 2 may be numbered 1, and so on. The reordered cases and their sequence numbers are recorded, and they can be stored in a test case library or in a separate document, so that the subsequent test execution and tracking are facilitated.
S124, respectively generating corresponding test environment parameter libraries according to the sequencing results of the environment evaluation cases.
Specifically, according to the sequencing result of the environment evaluation cases, the sequencing result of each environment evaluation case is checked in turn, and the test environment parameters are obtained. A new library of test environment parameters is created, which may be a table or document, for storing the test environment parameters. The test environment parameters involved in each use case are added to a test environment parameter library. Ensuring that each parameter has a corresponding field or column, and recording in a test environment parameter library. In adding the test environment parameters, it is ensured that duplicate parameters are avoided. If the same test environment parameters are used in a plurality of use cases, the test environment parameters are added only once. For each test environment parameter, the possible value range can be additionally recorded. This facilitates better configuration and setup of the test environment for subsequent test execution. In the test environment parameter library, information of the use case, such as the name, priority, etc., may be attached to correlate with the test environment parameters. This helps to understand the context and use of each parameter. And (3) periodically maintaining and updating a test environment parameter library, ensuring that the information in the test environment parameter library is consistent with the actual test environment, and adding new test environment parameters according to the requirement.
S125, configuring the test environment parameters corresponding to the environment evaluation cases in a test environment parameter library corresponding to the environment evaluation cases.
Specifically, in the test environment parameter library, a corresponding field or column is found, and the value of the parameter is filled in. If the range of the parameter values is recorded in the test environment parameter library, the configured parameter values are ensured to be within the range. And carrying out the steps on each environment evaluation case, and configuring the corresponding test environment parameters to the corresponding positions. In the test environment parameter library, information of the use case, such as the name, priority, etc., may be attached to correlate with the test environment parameters. And (3) periodically maintaining and updating a test environment parameter library, ensuring that the information in the test environment parameter library is consistent with the actual test environment, and adding new test environment parameters according to the requirement.
Specifically, as shown in fig. 5, step S130 further includes performing steps S131-S133:
S131, matching and combining the test command parameter library and the test environment parameter library according to the acquired serial numbers of the test command parameter library and the test environment parameter library to obtain the operation library.
Specifically, a test command parameter library and a test environment parameter library are opened. And searching the matched test command parameters and test environment parameters according to the serial numbers in the two libraries. For example, the sequence numbers in the test command parameter library may be used to match the sequence numbers in the test environment parameter library. A new runtime library, which may be a table or document, is created for storing the paired test command parameters and test environment parameters. And adding the matched test command parameters and the matched test environment parameters into the operation library. Ensuring that each parameter has a corresponding field or column, and recording in the runtime. Other parameter-related information, such as use case names, priorities, etc., can be attached to the runtime to correlate with the test command parameters and the test environment parameters. And (5) periodically maintaining and updating the operation library to ensure that the information in the operation library is consistent with the parameters actually required to be operated.
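A sketch of the sequence-number matching described above, assuming both parameter libraries are lists of dictionaries that share a seq field; the structure of the resulting operation library entries is illustrative.

```python
def build_operation_library(command_lib: list, environment_lib: list) -> list:
    """Pair test command and test environment entries that share the same sequence number."""
    env_by_seq = {entry["seq"]: entry for entry in environment_lib}
    operation_lib = []
    for cmd in command_lib:
        env = env_by_seq.get(cmd["seq"])
        if env is None:
            continue  # no matching environment entry for this sequence number
        operation_lib.append({"seq": cmd["seq"],
                              "command_params": cmd,
                              "environment_params": env})
    return operation_lib
```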
S132, generating a corresponding test script based on each parameter configured in the matched test command parameter library and the test environment parameter library in the operation library.
Specifically, according to the provided information, the matched test command parameters and test environment parameters in the runtime library can be used to generate corresponding test scripts. The previously created runtime library is opened. And searching the test command parameters and the test environment parameters of each pair in the operation library. And splicing and assembling the values of each pair of test command parameters and test environment parameters according to the required test script language format to generate corresponding test script codes. Checking whether the generated test script code accords with the expected format and logic or not, and ensuring that the parameter values in the test script code are correct and correct. The generated test script is saved to the appropriate location for subsequent use and execution.
S133, configuring each generated test script in a running library.
Specifically, a previously created runtime library is opened. A new field or column is created in the runtime for storing the relevant information of the test script. A header, such as "test script" or "script path" may be assigned to the field. For each paired test command parameter and test environment parameter, filling corresponding test script information into a test script field in the operation library. The path of the test script file may be provided or the script content copied and pasted directly into this field. Ensuring that the added test script information is saved in the operation library and is updated as necessary. The corresponding test script file or content can be found by the test script field in the runtime library when running the test. When new test scripts are generated or old test scripts are changed, test script fields in the operation library are updated in time so as to ensure that the recorded information is consistent with the state of the actual test scripts.
Specifically, as shown in fig. 6, after performing step S150, the method further includes performing steps S151-S153:
S151, generating a target application program operation command according to each parameter configured in the operation library.
Specifically, a previously created runtime library is opened. And searching for parameter configuration related to the target application program in the operation library. These parameters may include paths for the application, input file paths, output file paths, etc. For each parameter, the value of the parameter is obtained from the corresponding configuration field. The format and syntax of the value are ensured to meet the requirements of the target application program. And assembling the acquired parameter values according to the running command format of the target application program. Depending on the operating system and application requirements, it may be necessary to add quotation marks, delimiters, or other specific command structures. Checking whether the generated running command meets the requirement of the target application program or not, and ensuring that the parameter value, the path and other information in the running command are correct. The generated run command is saved to the appropriate location for subsequent use and execution.
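A hedged sketch of assembling a run command for an Android target application from parameters configured in the operation library; `am start -n` and the `-e` string extras are standard activity-manager options, while the parameter names and defaults are assumptions.

```python
import shlex

def build_run_command(entry: dict) -> str:
    """Assemble an adb launch command for the target application from an operation-library entry."""
    package = entry["package"]               # e.g. "com.example.app"
    activity = entry.get("activity", ".MainActivity")
    extras = entry.get("extras", {})         # optional string extras passed to the activity
    cmd = ["adb", "shell", "am", "start", "-W", "-n", f"{package}/{activity}"]
    for key, value in extras.items():
        cmd += ["-e", key, str(value)]       # `am start -e <key> <value>` adds a string extra
    return shlex.join(cmd)

print(build_run_command({"package": "com.example.app", "extras": {"scenario": "cold_start"}}))
```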
S152, starting the target application program according to the target application program operation command.
Specifically, according to the generated target application program running command, a proper terminal or command prompt is opened in the operating system. The way to open the terminal or command prompt may differ between operating systems; proceed according to the operating system documentation. Switch to the application directory: if the directory of the target application is different from the current working directory, the 'cd' command (in most operating systems) is required to switch to the application's directory. When using the 'cd' command, provide the correct directory path. Enter the generated running command of the target application program in the terminal or command prompt, and press the enter key to execute. Ensure that the format and grammar of the command are correct, and check whether the parameter values, paths and other information in the command are correct. Once the run command of the target application is executed, it is necessary to wait for the application to complete the startup process. The start-up time may vary depending on the complexity of the application and differences in computer performance. Once the application launch is complete, the interface, log, or other indication of the application may be checked to confirm whether it was successfully launched. Verification may take place in an appropriate manner depending on the type of application. If the target application has a specific start-up procedure or start-up option, it needs to be executed following the corresponding description or documentation.
And S153, matching and connecting with the target application program according to the automatic test model.
Specifically, to connect and match the automatic test model with the target application program, the functions of the test model and the interfaces of the target application program to be connected must first be understood, and it must be determined which interactions and operations the test model needs to perform with the target application. An interface document or description of the target application is obtained; these documents typically provide detailed information about the target application's APIs, command line parameters, available functions, and the like. The functions of the test model are compared with the interfaces of the target application program, and the degree of matching between them is analyzed. It is determined whether the test model can meet the requirements of the target application and whether appropriate adjustments or extensions are required. According to the matching condition of the test model and the target application program, corresponding test scripts or code are developed. The test script or code should interact with the target application using the functionality and operations provided by the test model. Before the test is performed, it must be ensured that the target application and its associated dependencies have been installed and configured in the test environment; this includes properly setting the application path, environment variables, file access rights, etc. The developed test script or code is run, interacting with the target application program and executing the corresponding test operations. During the test, test results are recorded and collected, and the necessary verification and assertions are performed. The results of the automated test are analyzed to check whether the expected behavior and function are met. From the test results, it may be determined whether the connection between the test model and the target application was successful and whether further optimization or adjustment is required.
Specifically, as shown in fig. 7, step S160 further includes performing steps S161-S164:
S161, monitoring the testing process of the target application program to acquire the running state parameters of the target application program.
Specifically, the target application typically generates a log file containing information about its running state and operations. The logging level can be configured so that the required parameters are captured in the log, and the required running state parameters can then be extracted by analyzing or parsing the log file. The running state of the target application is monitored using an appropriate monitoring tool; such tools can provide performance metrics, resource utilization, response time, and so on. The parameters may be configured and collected using open source tools (e.g., Prometheus, Grafana) or commercial monitoring tools. If the target application has a remote management interface or API, the running state parameters may be obtained by querying that interface; for example, scripts or code may be written to perform remote API calls and obtain the required information. During testing, appropriate system commands and tools may also be used to obtain the running state parameters of the target application; for example, on Linux systems, command line tools such as ps, top and htop can be used to view process status, memory usage, CPU utilization, and so on. Depending on the nature and requirements of the target application, a custom monitor may also be written to track its running state; this may be a stand-alone program or script that uses API calls, log analysis and similar methods to collect and record the key parameters.
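As one possible way to collect such running state parameters, the sketch below uses the psutil library to sample CPU, memory and thread information of the target application process; the parameter names in the returned dictionary are chosen only for illustration.

    import psutil

    def sample_running_state(pid):
        # Collects a few running state parameters of the process with the
        # given pid; the resulting dictionary can be written to the running library.
        proc = psutil.Process(pid)
        return {
            "cpu_percent": proc.cpu_percent(interval=1.0),
            "memory_mb": proc.memory_info().rss / (1024 * 1024),
            "num_threads": proc.num_threads(),
            "status": proc.status(),
        }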
S162, generating a corresponding target application program feedback command according to the running state parameters.
Specifically, generating the corresponding target application feedback command according to the running state parameters may be performed as follows. An appropriate feedback strategy is determined based on the requirements and expectations of the target application; this may include generating different feedback commands for different running state parameters, or triggering corresponding feedback based on specific conditions and rules. The running state parameters obtained by monitoring, logging or other means are analyzed and processed; custom scripts, programs or tools may be used to parse and process these parameters so that feedback commands can be generated as needed. The conditions for triggering a feedback command are determined according to the analysis results of the running state parameters; for example, if the value of a certain parameter exceeds a preset threshold, or an abnormal situation occurs, it may be determined that a feedback command needs to be triggered. A corresponding feedback command is then generated according to the feedback strategy and the trigger conditions; this may involve invoking the API interface of the target application, sending a specific request, modifying a configuration file, and so on.
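A minimal sketch of such a rule-based feedback strategy is given below; the thresholds and command names are assumptions used only to illustrate how monitored parameters can be mapped to feedback commands.

    # Hypothetical thresholds; the real values come from the feedback strategy
    # defined for the target application.
    THRESHOLDS = {"cpu_percent": 80.0, "memory_mb": 1024.0}

    def build_feedback_command(state):
        # Maps the monitored running state parameters to a feedback command.
        if state["cpu_percent"] > THRESHOLDS["cpu_percent"]:
            return {"command": "throttle", "reason": "cpu over threshold"}
        if state["memory_mb"] > THRESHOLDS["memory_mb"]:
            return {"command": "restart", "reason": "memory over threshold"}
        return {"command": "none", "reason": "all parameters within range"}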
S163, selecting one of the operation libraries corresponding to the feedback command of the target application program as a target operation library according to the automatic test model.
Specifically, to select one of the operation libraries corresponding to the feedback command of the target application program according to the automatic test model, an appropriate test framework and programming language are first selected according to the characteristics of the automatic test model and the target application program. Common test frameworks include Selenium, Appium, JUnit, PyTest, and the like, and the programming language may be Java, Python, JavaScript, and so on. For the selected test framework and programming language, available operation libraries or tools corresponding to the target application feedback command are then located. These libraries are typically developed and supported by communities or developers and may be found through channels such as search engines, open source code repositories and developer forums.
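The selection of the target operation library can be expressed, for example, as a simple lookup from feedback commands to libraries; the mapping below is hypothetical and would in practice be defined when the automatic test model is built.

    # Hypothetical mapping from feedback commands to operation libraries.
    OPERATION_LIBRARIES = {
        "throttle": "performance_operation_library",
        "restart": "stability_operation_library",
        "none": "default_operation_library",
    }

    def select_operation_library(feedback_command):
        # Picks the operation library that corresponds to the feedback command.
        return OPERATION_LIBRARIES.get(feedback_command["command"],
                                       "default_operation_library")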
S164, configuring a running script generated based on the actual cold start duration and the actual hot start duration in the target operation library.
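A minimal sketch of how the actual start durations could be measured is shown below; the ready_check callable and the timeout are assumptions, and the same helper would be called once for the cold start (first launch) and once for the hot start (relaunch).

    import subprocess
    import time

    def measure_start_duration(run_command, app_dir, ready_check, timeout=60.0):
        # Launches the application and measures how long it takes until the
        # supplied ready_check() callable reports that start-up has completed.
        start = time.monotonic()
        process = subprocess.Popen(run_command, cwd=app_dir)
        while time.monotonic() - start < timeout:
            if ready_check():
                return process, time.monotonic() - start
            time.sleep(0.2)
        raise TimeoutError("application did not become ready within the timeout")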
Specifically, as shown in fig. 8, step S170 further includes performing step S171 and step S1711:
S171, judging whether the similarity between the functional parameters in the running script and the functional parameters in the test script is higher than the expected similarity preset in the test expected information of the target application program.
Specifically, if the similarity between the functional parameters in the running script and the functional parameters in the test script is higher than the expected similarity preset in the test expected information of the target application, step S1711 is executed:
S1711, generating evaluation result information.
Specifically, if the similarity between the functional parameters in the running script and the functional parameters in the test script is not higher than the expected similarity preset in the test expected information of the target application, a running command of the target application is generated according to the parameters configured in the operation library, that is, the flow returns to step S151:
S151, generating a target application program running command according to each parameter configured in the operation library.
Specifically, to determine whether the similarity between the functional parameters in the running script and the functional parameters in the test script is higher than the expected similarity preset in the test expected information of the target application program, the following steps may be performed: 1. Understand the test expected information of the target application: it is first necessary to know what the test expected information of the target application contains; such expected information may include expected values of functional parameters, value ranges, constraints, and the like. 2. Analyze the functional parameters of the running script and the test script: the functional parameters used in the running script and the test script are analyzed carefully to determine to what extent they match the test expected information of the target application. 3. Compare the similarity of the functional parameters: the functional parameters in the running script and the test script are compared with the test expected information of the target application program, and their similarity is evaluated. The similarity may be determined by taking into account whether the value of each functional parameter is consistent with or close to the expected value in the expected information, whether the value range of the functional parameter meets the range requirement in the expected information, and whether the functional parameter satisfies the constraints or limitations in the expected information. 4. Judge whether the similarity is higher than the expected similarity: the obtained similarity value is compared with the expected similarity preset in the test expected information of the target application program. If the obtained similarity is higher than or equal to the expected similarity, the functional parameters in the running script and the functional parameters in the test script basically meet the expectation, and the similarity can be considered high. If the obtained similarity is lower than the expected similarity, the running script or the test script needs to be further adjusted to improve the matching degree between the functional parameters and the expectations of the target application program.
Through the steps, the similarity between the functional parameters in the running script and the functional parameters in the test script can be evaluated and compared with the expected similarity preset in the test expected information of the target application program. This can help determine whether further optimization of the running script or test script is needed to better meet the expected requirements.
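One possible way to compute such a similarity value is sketched below; the matching rule (exact match for non-numeric parameters, a small relative tolerance for numeric ones) and the 5% tolerance are assumptions used only for illustration.

    def parameter_similarity(run_params, test_params):
        # Fraction of functional parameters whose actual value matches the
        # expected value; numeric values are compared with a relative tolerance.
        keys = set(run_params) | set(test_params)
        if not keys:
            return 1.0
        matches = 0
        for key in keys:
            actual, expected = run_params.get(key), test_params.get(key)
            if isinstance(actual, (int, float)) and isinstance(expected, (int, float)):
                matches += abs(actual - expected) <= 0.05 * max(abs(expected), 1e-9)
            else:
                matches += (actual == expected)
        return matches / len(keys)

    # If parameter_similarity(...) is below the preset expected similarity,
    # the flow returns to step S151 and a new run command is generated.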
Fig. 9 is a schematic block diagram of an application automation test device according to an embodiment of the present application. As shown in fig. 9, the present application further provides an application automation test device 100 corresponding to the above application automation test method. The application automation test device comprises units for executing the above application automation test method, and the device can be configured in a desktop computer, a tablet computer, a portable computer, or other terminals. Specifically, referring to fig. 9, the application automation test device 100 includes an information acquisition unit 110, a parameter acquisition unit 120, a function library generation unit 130, a model generation unit 140, an application connection unit 150, a script generation unit 160, and a comparison verification unit 170:
the information acquisition unit is used for acquiring the test expected information of the target application program and configuring various parameters in the test expected information in a preset test command parameter library;
The parameter acquisition unit is used for acquiring the test environment parameters and configuring the test environment parameters in a preset test environment parameter library;
the function library generating unit is used for establishing an operation library based on the test command parameter library and the test environment parameter library, generating test scripts corresponding to the test expected information and the test environment parameters and configuring the test scripts in the operation library;
the model generating unit is used for establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library;
the application program connection unit is used for carrying out matching connection on the automatic test model and the target application program;
the script generation unit is used for acquiring the running state parameters of the target application program, configuring the running state parameters in the running library and generating a corresponding running script;
and the comparison verification unit is used for comparing and verifying the running script and the test script to generate evaluation result information corresponding to the comparison verification result.
It should be noted that, as those skilled in the art can clearly understand, the specific implementation process of the above-mentioned application program automation test device and each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, the description is omitted here.
The above-described application automation test device may be implemented in the form of a computer program that is executable on a computer apparatus as shown in fig. 10.
Referring to fig. 10, fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a terminal or a server, where the terminal may be an electronic device with a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device. The server may be an independent server or a server cluster formed by a plurality of servers.
With reference to FIG. 10, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform an application automation test method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform an application automation test method.
The network interface 505 is used for network communication with other devices. It will be appreciated by those skilled in the art that the structure shown in FIG. 10 is merely a block diagram of some of the structures associated with the present inventive arrangements and does not constitute a limitation of the computer device 500 to which the present inventive arrangements may be applied, and that a particular computer device 500 may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
It should be appreciated that in an embodiment of the application, the processor 502 may be a central processing unit (Central Processing Unit, CPU), and the processor 502 may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Those skilled in the art will appreciate that all or part of the flow in a method embodying the above described embodiments may be accomplished by computer programs instructing the relevant hardware. The computer program comprises program instructions, and the computer program can be stored in a storage medium, which is a computer readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present application also provides a storage medium. The storage medium may be a computer readable storage medium. The storage medium stores a computer program, wherein the computer program includes program instructions. The program instructions, when executed by the processor, cause the processor to perform the steps of:
acquiring test expected information of a target application program, and configuring various parameters in the test expected information in a preset test command parameter library, wherein the test expected information comprises an expected cold start duration and an expected hot start duration; acquiring test environment parameters, and configuring the test environment parameters in a preset test environment parameter library; establishing an operation library based on the test command parameter library and the test environment parameter library, generating test scripts corresponding to the test expected information and the test environment parameters, and configuring the test scripts in the operation library; establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library; matching and connecting the automatic test model with the target application program; acquiring running state parameters of the target application program, configuring the running state parameters in the running library and generating a corresponding running script, wherein the running state parameters comprise an actual cold start duration and an actual hot start duration; and comparing and verifying the running script with the test script to generate evaluation result information corresponding to the comparison and verification result.
The storage medium may be a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk, or other various computer-readable storage media that can store program codes.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a terminal, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. An automatic testing method for an application program, applied to an operating system configured on an intelligent terminal, wherein a target application program to be tested is configured in the operating system, characterized by comprising the following steps:
acquiring test expected information of the target application program, and configuring various parameters in the test expected information in a preset test command parameter library, wherein the test expected information comprises a number of expected cold start durations and a number of expected hot start durations;
acquiring test environment parameters, and configuring the test environment parameters in a preset test environment parameter library;
establishing an operation library based on the test command parameter library and the test environment parameter library, generating test scripts corresponding to the test expected information and the test environment parameters, and configuring the test scripts in the operation library;
establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library;
matching and connecting the automatic test model with the target application program;
acquiring running state parameters of the target application program, configuring the running state parameters in the running library and generating corresponding running scripts, wherein the running state parameters comprise a number of actual cold start durations and a number of actual hot start durations;
And comparing and verifying the running script with the test script to generate evaluation result information corresponding to a comparison and verification result.
2. The method for automatically testing an application program according to claim 1, wherein the configuring each parameter in the test expected information in a preset test command parameter library includes:
generating a plurality of test cases based on the number of expected cold start durations shown in the test expected information;
sequentially arranging a plurality of test cases according to the priority;
respectively generating corresponding test command parameter libraries according to the sequencing results of the test cases;
and configuring the expected cold start duration number corresponding to each test case in a test command parameter library corresponding to the test case.
3. The method for automatically testing an application program according to claim 2, wherein the obtaining the test environment parameter and configuring the test environment parameter in a preset test environment parameter library includes:
acquiring and classifying test environment information to obtain a plurality of test environment parameters expected based on a test environment;
generating a plurality of corresponding environment assessment cases based on a plurality of combination relations of a plurality of the test environment parameters;
Sequentially arranging a plurality of environment evaluation cases according to priority and obtaining corresponding serial numbers;
respectively generating corresponding test environment parameter libraries according to the sequencing results of the environment evaluation cases;
and configuring the test environment parameters corresponding to the environment evaluation cases in a test environment parameter library corresponding to the environment evaluation cases.
4. The method for automatically testing an application program according to claim 3, wherein the establishing an operation library based on the test command parameter library and the test environment parameter library, generating test scripts corresponding to the test expected information and the test environment parameters, and configuring the test scripts in the operation library comprises:
pairing and combining the test command parameter library and the test environment parameter library according to the acquired sequence number of the test command parameter library and the acquired sequence number of the test environment parameter library to obtain the operation library;
generating a corresponding test script based on each parameter configured in a matched test command parameter library and a test environment parameter library in the operation library;
and configuring each generated test script in the operation library.
5. The method for automatically testing an application program according to claim 4, wherein the matching and connecting the automatic test model with the target application program comprises:
Generating a target application program operation command according to each parameter configured in the operation library;
starting the target application program according to the target application program running command;
and matching and connecting with the target application program according to the automatic test model.
6. The method for automatically testing an application program according to claim 5, wherein the acquiring running state parameters of the target application program, configuring the running state parameters in the running library and generating a corresponding running script, the running state parameters comprising a number of actual cold start durations and a number of actual hot start durations, comprises:
monitoring the test process of the target application program to obtain the running state parameters of the target application program;
generating a corresponding target application program feedback command according to the running state parameters;
selecting one of the operation libraries corresponding to the feedback command of the target application program as a target operation library according to the automatic test model;
and configuring the running script generated based on the actual cold start duration and the actual hot start duration in the target operation library.
7. The method for automated testing of an application program according to claim 6, wherein comparing the running script with the test script, generating evaluation result information according to a comparison verification result, comprises:
Judging whether the similarity between the functional parameters in the running script and the functional parameters in the test script is higher than the expected similarity preset in the test expected information of the target application program;
if the similarity between the functional parameters in the running script and the functional parameters in the test script is higher than the expected similarity preset in the test expected information of the target application program, generating evaluation result information;
and if the similarity between the functional parameters in the operation script and the functional parameters in the test script is not higher than the expected similarity preset in the test expected information of the target application program, generating a target application program operation command according to each parameter configured in the operation library.
8. An application automation test device comprising means for performing the method of any one of claims 1-7:
the information acquisition unit is used for acquiring the test expected information of the target application program and configuring various parameters in the test expected information in a preset test command parameter library;
the parameter acquisition unit is used for acquiring the test environment parameters and configuring the test environment parameters in a preset test environment parameter library;
The function library generating unit is used for establishing an operation library based on the test command parameter library and the test environment parameter library, generating test scripts corresponding to the test expected information and the test environment parameters and configuring the test scripts in the operation library;
the model generating unit is used for establishing an automatic test model according to the test command parameter library, the environment information parameter library and the operation library;
the application program connection unit is used for carrying out matching connection on the automatic test model and the target application program;
the script generation unit is used for acquiring the running state parameters of the target application program, configuring the running state parameters in the running library and generating corresponding running scripts;
and the comparison verification unit is used for comparing and verifying the running script and the test script to generate evaluation result information corresponding to the comparison verification result.
9. A computer device, characterized in that it comprises a memory on which a computer program is stored and a processor which, when executing the computer program, implements the method according to any of claims 1-7.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program comprising program instructions which, when executed by a processor, can implement the method of any of claims 1-7.
CN202311031410.8A 2023-08-15 2023-08-15 Automatic testing method and device for application program, computer equipment and storage medium Pending CN117056221A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311031410.8A CN117056221A (en) 2023-08-15 2023-08-15 Automatic testing method and device for application program, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117056221A true CN117056221A (en) 2023-11-14

Family

ID=88660384


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination