CN111124919A - User interface testing method, device, equipment and storage medium


Info

Publication number
CN111124919A
Authority
CN
China
Prior art keywords
test, test case, user interface, target, case set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911347813.7A
Other languages
Chinese (zh)
Inventor
吴运娣
Current Assignee
Zhejiang Nuonuo Network Technology Co ltd
Original Assignee
Zhejiang Nuonuo Network Technology Co ltd
Application filed by Zhejiang Nuonuo Network Technology Co ltd filed Critical Zhejiang Nuonuo Network Technology Co ltd
Priority to CN201911347813.7A
Publication of CN111124919A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites

Abstract

The application discloses a user interface testing method, apparatus, device, and storage medium. The method comprises: setting test cases in advance according to the page controls of a user interface, the operation behaviors of the page controls, and an execution result verification mode; using the test cases to set a test case set corresponding to each test scene; acquiring a target test instruction for testing the user interface; and calling the corresponding target test case set according to the target test instruction and a preset execution script, running each test case in the target test case set, testing the user interface, and obtaining a test result file. The method reduces the skill level required of testers, reduces the consumption of human resources, and improves testing efficiency; the test case sets and the preset execution script are highly reusable, which greatly saves test time; and large-batch test requirements can be met simply by calling the execution script and a test case set, further improving testing efficiency.

Description

User interface testing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of UI testing, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for testing a user interface.
Background
With the rapid development of computer technology, the demand for testing the user interfaces of terminal devices keeps growing. In the prior art, a tester writes a dedicated test program for each test scene, runs the program to test the user interface of the terminal device, and obtains a corresponding test result file. This prior-art approach requires manual operation, making the test process cumbersome and time-consuming; writing test programs for user interface testing is difficult and demands a high skill level from testers, further increasing the manpower required; moreover, such manual testing cannot meet large-batch test requirements.
Therefore, how to improve the testing efficiency of the user interface, reduce the consumption of human resources, and meet the requirement of mass testing is a technical problem that needs to be solved by those skilled in the art at present.
Disclosure of Invention
In view of this, the present invention provides a method for testing a user interface, which can improve the efficiency of testing the user interface, reduce the consumption of human resources, and meet the requirements of mass tests; another object of the present invention is to provide a testing apparatus, a testing device and a computer readable storage medium for a user interface, all of which have the above advantages.
In order to solve the above technical problem, the present invention provides a method for testing a user interface, comprising:
setting a test case in advance according to a page control of a user interface, an operation behavior of the page control and an execution result verification mode;
setting a test case set corresponding to each test scene by using the test cases;
acquiring a target test instruction for testing the user interface;
and calling a corresponding target test case set in the test case set according to the target test instruction and a preset execution script, running each test case in the target test case set, and testing the user interface to obtain a test result file.
Preferably, further comprising:
setting common keywords by using the same page control and the same operation behavior of the page control in different test scenes;
correspondingly, the process of setting the test case in advance according to the page control of the user interface, the operation behavior of the page control and the execution result verification mode specifically includes:
and setting the test case in advance according to the positioning keywords of the page control of the user interface, the operation keywords of the operation behavior of the page control, the common keywords and the execution result verification keywords.
Preferably, when the user interface is tested to obtain a test result file by running each test case in the target test case set, the method further includes:
and sending out corresponding prompt information when the abnormal test result exists in the test result file.
Preferably, when the user interface is tested to obtain a test result file by running each test case in the target test case set, the method further includes:
and calling a screenshot script to perform screenshot on the current test operation when the abnormal test result exists in the test result file.
Preferably, after calling the corresponding target test case set according to the target test instruction and the preset execution script, running each test case in the target test case set, testing the user interface, and obtaining the test result file, the method further comprises:
and carrying out statistical analysis on the test result file to obtain an analysis report.
Preferably, after the performing the statistical analysis on the test result file to obtain an analysis report, the method further includes:
and sending the test result file and/or the analysis report to a target client.
Preferably, the process of calling the corresponding target test case set according to the target test instruction and the preset execution script, running each test case in the target test case set, and testing the user interface to obtain the test result file specifically comprises:
calling a corresponding target test case set in the test case set according to the target test instruction and the preset execution script;
if the calling of the target test case set fails or the execution of the test cases in the target test case set fails, calling the target test case set again or executing the test cases again after a preset time length;
otherwise, testing the user interface by using each test case in the target test case set to obtain the test result file.
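The retry behavior just described (re-calling the case set or re-executing a case after a preset time length when a call or execution fails) can be sketched minimally as follows; the function name, the exception-based failure signal, and the default attempt count are assumptions of this illustration, not part of the disclosure:

```python
import time

def run_with_retry(action, max_attempts=3, retry_delay=0.0):
    """Try `action`; on a call or execution failure, wait a preset time
    length and try again, mirroring the retry step described above."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return action()
        except Exception as exc:        # calling or executing failed
            last_error = exc
            time.sleep(retry_delay)    # the preset time length
    raise RuntimeError("still failing after retries") from last_error
```

In practice `action` would be the call that loads the target test case set or executes one test case.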
In order to solve the above technical problem, the present invention further provides a testing apparatus for a user interface, including:
the test case setting module is used for setting a test case in advance according to a page control of a user interface, an operation behavior of the page control and an execution result verification mode;
the test case set setting module is used for setting a test case set corresponding to each test scene by using the test cases;
the instruction acquisition module is used for acquiring a target test instruction for testing the user interface;
and the operation test module is used for calling a corresponding target test case set in the test case set according to the target test instruction and a preset execution script, operating each test case in the target test case set, testing the user interface and obtaining a test result file.
Preferably, further comprising:
the public keyword setting module is used for setting public keywords by using the same page control and the same operation behavior of the page control in different test scenes;
correspondingly, the test case setting module specifically comprises:
and the test case setting sub-module is used for setting the test case in advance according to the positioning key words of the page control of the user interface, the operation key words of the operation behaviors of the page control, the public key words and the execution result verification key words.
Preferably, further comprising:
and the prompt module is used for sending out corresponding prompt information when the abnormal test result exists in the test result file.
Preferably, further comprising:
and the screenshot module is used for calling a screenshot script to screenshot the current test operation when the abnormal test result exists in the test result file.
Preferably, further comprising:
and the analysis module is used for carrying out statistical analysis on the test result file to obtain an analysis report.
Preferably, further comprising:
and the sending module is used for sending the test result file and/or the analysis report to a target client.
In order to solve the above technical problem, the present invention further provides a testing apparatus for a user interface, including:
a memory for storing a computer program;
and a processor for implementing the steps of any one of the above user interface testing methods when executing the computer program.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of any one of the above user interface testing methods.
The invention provides a test method of a user interface, which sets a test case in advance according to a page control of the user interface, an operation behavior of the page control and an execution result verification mode; then, setting a test case set corresponding to each test scene by using the test cases; therefore, when the user interface test is needed, the corresponding target test case set in the test case set can be called according to the target test instruction and the preset execution script only by inputting the target test instruction, the user interface test is realized by running each test case in the target test case set, and the test result file is obtained. Therefore, compared with the method for realizing the user interface test through manual execution in the prior art, the method realizes the automatic test of the user interface by utilizing the preset test case set and the preset execution script, can reduce the requirement on the skill level of testers, reduces the consumption of manpower resources and improves the test efficiency; the reusability of the test case set and the preset execution script is strong, so that the test time can be greatly saved; in addition, different user interfaces can be tested simultaneously by calling the execution script and the test case set, so that the large-batch test requirements are met, and the test efficiency is further improved.
In order to solve the technical problem, the invention also provides a testing device, equipment and a computer readable storage medium of the user interface, which have the beneficial effects.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a block diagram of a user interface testing framework based on the AppiumForMac client;
FIG. 2 is a flowchart of a method for testing a user interface according to an embodiment of the present invention;
FIG. 3 is a block diagram of a testing apparatus for a user interface according to an embodiment of the present invention;
FIG. 4 is a structural diagram of a testing apparatus of a user interface according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The core of the embodiment of the invention is to provide a test method of a user interface, which can improve the test efficiency of the user interface, reduce the consumption of manpower resources and meet the requirement of mass tests; another core of the present invention is to provide a testing apparatus, a device and a computer readable storage medium for a user interface, all having the above-mentioned advantages.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
It should be noted that, in actual operation, a test environment for implementing the test method of the user interface provided in the present application needs to be deployed first. In addition, the embodiment provides a specific test environment deployment scheme, and in other embodiments, the test environment deployment scheme may be adaptively adjusted according to actual needs and advances in computer technology, which is not specifically limited in the embodiment.
Specifically, deploying the test environment includes configuring the operating system of the terminal device. This embodiment takes the user interface of a Mac OS computer as an example, with Mac OS X version 10.7 or later. The Mac computer needs Xcode, version 7.2.1 or later, which compiles and packages the app under test. Python and PyCharm are installed: the test framework is developed in Python, with PyCharm as the editor. WebDriver and Selenium are installed to simulate the operation behaviors performed with the user's mouse and keyboard. Node.js (version greater than 10) is installed as the basic dependency environment for running AppiumForMac. The appium-mac-driver is installed to provide an Appium connection driver for the Mac system, obtain Mac platform information, and create sessions; its default port number is 4622. The AppiumForMac.app client and server are installed; the system's accessibility permission must be granted to the application, and the service opened. Appium for Mac can then control the native user interface of Mac applications using Selenium/WebDriver and the OS accessibility APIs.
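A pre-flight check along the lines of the environment check described here could be sketched as follows; the tool names and the use of the default port 4622 are illustrative assumptions, not the patent's actual check file:

```python
import shutil
import socket

APPIUM_FOR_MAC_PORT = 4622            # default appium-mac-driver port

def check_environment(tools=("python3", "node"), port=APPIUM_FOR_MAC_PORT):
    """Return a dict of check results: is each required tool on PATH,
    and is the service port still free for AppiumForMac to bind?"""
    results = {tool: shutil.which(tool) is not None for tool in tools}
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 only if something already listens on the port
        results["port_free"] = s.connect_ex(("127.0.0.1", port)) != 0
    return results
```

A real deployment would additionally verify tool versions (Xcode 7.2.1+, Node.js > 10) before allowing the test run to proceed.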
Then, the automated testing framework is set up. Specifically, FIG. 1 is a schematic diagram of a user interface testing framework based on the AppiumForMac client; the framework comprises a configuration layer, a driver layer, and a business layer. The configuration layer contains an environment check file, an AppiumForMac service file, and the Mac App configuration. The environment check file checks whether the test environment necessary for running the user interface testing method is installed; if the environment information is correct, the next operation proceeds, and if it is incorrect, an exception prompt is given and the test exits. The AppiumForMac service file encapsulates the appium-mac-driver, initializes the driver configuration, and provides the driver connecting the automation framework to the device. It should be noted that before a whole test case set is started, a connection or a connection pool (the pool corresponds to multiple terminal devices) is established: a MyDriver class is customized, and the framework's appium-mac-driver is held as a built-in object of that driver, making the driver controllable. Because each connection consumes resources, encapsulating the driver lets the framework control the connection time and the number of connections. When a test case set runs, a session is created and the appium-mac-driver service is started on the initial port 4622, establishing the connection between the AppiumForMac service and the App; each service connects to only one terminal device. After the run finishes, the port is closed and the service and port are released. The Mac-side `lsof -i` command is called to probe each port to be checked, and in parallel testing a different port is automatically allocated to each terminal device.
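The MyDriver encapsulation described above could look roughly like this; the class shape, the `driver_factory` stand-in for the real appium-mac-driver session setup, and the connection counter are assumptions of this sketch:

```python
class MyDriver:
    """Custom wrapper holding the appium-mac-driver session as a built-in
    object, so the framework controls when connections open and release."""

    open_connections = 0                 # connections are costly, so count them

    def __init__(self, driver_factory, port=4622):
        self._factory = driver_factory   # stands in for real session setup
        self.port = port                 # one service per terminal device
        self._driver = None

    def connect(self):
        if self._driver is None:         # reuse the session if already open
            self._driver = self._factory(self.port)
            MyDriver.open_connections += 1
        return self._driver

    def release(self):
        """Free the session and port after the case set finishes."""
        if self._driver is not None:
            self._driver = None          # real code would also quit the session
            MyDriver.open_connections -= 1
```

A connection pool for parallel testing would simply hold one `MyDriver` per terminal device, each on its own allocated port.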
The Mac App configuration encapsulates the related configuration of the Mac device, the App package name configuration, and the initialization driver configuration; Xcode is called to automatically compile and package the app and start it. When the execution script runs, the AppiumForMac server reads the configuration file and determines the configuration information. If the configuration information is correct, the next step proceeds and the information of the test case set used to test the user interface is read; if the configuration information is incorrect, an exception prompt is given and the test exits.
Specifically, the driver layer provides a common method base class that upper layers call; common operations such as element positioning, gestures, sliding, screenshots, and waiting are encapsulated in the driver layer for convenient calling by the business layer.
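A driver-layer base class of the kind described could be sketched as follows; the method set shown (find, click, input) is a subset of the operations listed above, and the injected `driver` object's `find_element` interface is an assumption:

```python
class DriverBase:
    """Driver-layer sketch: common operations wrapped in one base class so
    the business layer never touches the raw driver directly."""

    def __init__(self, driver):
        self.driver = driver
        self.steps = []                          # record of performed operations

    def find(self, locator):
        self.steps.append(("find", locator))
        return self.driver.find_element(locator)

    def click(self, locator):
        element = self.find(locator)
        self.steps.append(("click", locator))
        element.click()

    def input(self, locator, text):
        element = self.find(locator)
        self.steps.append(("input", locator, text))
        element.send_keys(text)
```

Gesture, slide, screenshot, and wait helpers would be added to the same class in the same style.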
The business layer adopts the Python + PageObject pattern and calls the Python API interface: the positioning of each page control of the user interface and the operation behaviors of the page controls are placed in separate program-code files and encapsulated into corresponding page classes, so that a test case contains only business code. Business is thus separated from data, and test scenes and their execution order can be freely customized.
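A business-layer page class in the Python + PageObject style described here might look like this; the locator strings and the driver's `input`/`click` methods are illustrative assumptions, not taken from the patent:

```python
class LoginPage:
    """Page class: locators and operation behaviors live here, so a test
    case contains only business steps."""

    USERNAME = "AXTextField[@label='username']"   # hypothetical AXPath locators
    PASSWORD = "AXTextField[@label='password']"
    LOGIN_BUTTON = "AXButton[@label='login']"

    def __init__(self, driver):
        self.driver = driver                      # driver-layer wrapper

    def login(self, user, password):
        self.driver.input(self.USERNAME, user)
        self.driver.input(self.PASSWORD, password)
        self.driver.click(self.LOGIN_BUTTON)
```

Because the test case only calls `login(...)`, a change to the page layout is absorbed inside the page class.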
Therefore, when the user interface test is needed, the test of the user interface is realized based on the pre-deployed test environment and the built automatic test framework.
Fig. 2 is a flowchart of a method for testing a user interface according to an embodiment of the present invention. As shown in fig. 2, a method for testing a user interface includes:
S10: setting a test case in advance according to a page control of a user interface, an operation behavior of the page control, and an execution result verification mode;
S20: setting a test case set corresponding to each test scene by using the test cases.
Specifically, in this embodiment, a page control in the user interface is first obtained, and then a corresponding positioning keyword is set for the page control. The positioning keywords may be elements AXPath, text, and the like, and are used to identify a path and a name of a uniquely corresponding page control, and the plurality of positioning keywords form a page element management class, and the page element management class may be updated according to actual requirements. Specifically, the operation behavior of the page control includes: clicking, inputting, sliding, screenshot, mouse, keyboard events and the like, setting element operation files corresponding to the implementation methods of the operation behaviors, setting corresponding operation keywords for the corresponding element operation files, forming a common method library by a plurality of operation keywords, updating the operation keywords in the common method library according to actual requirements, and enabling each operation keyword to correspond to one operation step.
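The two keyword tables described above (the page element management class of positioning keywords, and the common method library of operation keywords) can be sketched as plain lookup tables; the locator strings and keyword names here are illustrative assumptions:

```python
# Page element management class: positioning keyword -> AXPath/text locator.
PAGE_ELEMENTS = {
    "login_button": "AXButton[@label='login']",
    "username_box": "AXTextField[@label='username']",
}

# Common method library: operation keyword -> one operation step.
OPERATIONS = {
    "click": lambda driver, locator: driver.click(locator),
    "input": lambda driver, locator, value: driver.input(locator, value),
}

def run_step(driver, op_keyword, element_keyword, *values):
    """Resolve both keyword tables and execute a single operation step."""
    locator = PAGE_ELEMENTS[element_keyword]
    return OPERATIONS[op_keyword](driver, locator, *values)
```

Updating a control's locator or adding a new operation then only touches the relevant table, matching the update-by-requirement behavior described in the text.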
In actual operation, the execution results (actual operation data) produced by different test operations may require different verification methods, including: detecting the display state of the actual operation data, comparing its numerical value, judging numerical accuracy, judging whether a searched page control exists, whether searched page data exists, and whether the interface after a jump is correct. Execution result check files for verifying the different kinds of actual operation data are therefore preset, and a corresponding execution result check keyword is set for each check file; the corresponding check file is then called according to the execution result check keyword in a test case, verifying the actual operation data produced by the operation behaviors of the current test case.
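Dispatching from an execution result check keyword to its check file, as described above, can be sketched with a lookup table of check functions; the keyword names and the specific checks are illustrative assumptions:

```python
# Execution result checks, keyed by verification keyword; each check
# receives the actual operation data produced by a test step.
RESULT_CHECKS = {
    "equals":   lambda actual, expected=None: actual == expected,
    "exists":   lambda actual, expected=None: actual is not None,
    "close_to": lambda actual, expected=None: abs(actual - expected) < 1e-6,
}

def verify(check_keyword, actual, expected=None):
    """Call the check selected by the execution result check keyword."""
    return RESULT_CHECKS[check_keyword](actual, expected)
```

A numerical-accuracy check like `close_to` is one way to handle the value-comparison cases mentioned in the text.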
Specifically, according to the actual test behavior, the corresponding positioning keywords and operation keywords are arranged in a yaml file in operation-step order, the execution result check keywords are arranged in verification order, and an association is set between each keyword and the storage path of the file containing the corresponding business code to facilitate calling; this yields the test case. That is, this embodiment uses yaml to configure a multi-keyword driver for writing test cases: several kinds of keywords are combined into one test case, and the business-code file for the corresponding page control, operation behavior, or execution result verification is then called according to the keywords in the test case, executing the function of that file and thus performing the user interface test operation. Keywords can therefore be combined according to actual requirements into different test cases, and the business code behind each keyword is highly reusable and easy to maintain. In addition, as a preferred embodiment, a waiting duration may be added after each operation keyword; its specific value can be set according to actual requirements and is not limited here. The added waiting duration avoids test case failures caused by page controls loading late under network delay.
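A keyword-driven test case of the shape described, after the yaml file has been parsed, could be run like this; the field names, step contents, and the step-executor callback are assumptions of this sketch:

```python
import time

# One test case as it might look after parsing the yaml file: keywords in
# operation-step order, each with an optional wait that absorbs page-loading
# delay under network latency.
LOGIN_CASE = [
    {"op": "input", "element": "username_box", "value": "alice", "wait": 0.0},
    {"op": "input", "element": "password_box", "value": "secret", "wait": 0.0},
    {"op": "click", "element": "login_button", "wait": 0.0},
]

def run_case(steps, execute):
    """Run the keyword steps in order; `execute` turns one step into a
    driver action, and each step's waiting duration runs afterwards."""
    for step in steps:
        execute(step)
        time.sleep(step.get("wait", 0.0))
```

In the real framework `execute` would resolve the keywords against the element and operation tables and drive the app under test.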
It should be noted that another advantage of storing the positioning keywords in the preset yaml file is that when the page control of the user interface is changed, only the positioning keywords of the corresponding page control in the yaml file need to be modified, and the corresponding test case does not need to be modified, so that the maintenance difficulty and complexity are reduced.
In addition, it should be noted that the test case may further include test data, where the test data refers to information used for simulating input in the test process, such as a user name and a password that need to be input in the login process, and a calculation object that needs to be input in the calculation process, which are all obtained through the test data. The test data may be data preset in a database, or may be random numbers generated by a preset random algorithm. In addition, in order to ensure the accuracy and diversity of the test data and implement the parametric test, the test data may be adjusted according to actual requirements, such as performing operations such as adding and deleting on data in the database, or modifying a random algorithm, which is not limited in this embodiment. In addition, after the test operation is completed, the test data can be further cleaned and destroyed, so that the influence of the used test data on the subsequent test process is avoided.
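Of the two test-data sources mentioned above (preset database rows or a preset random algorithm), the random route plus the cleanup step could be sketched as follows; the field names and data shapes are illustrative assumptions:

```python
import random
import string

def make_test_data(seed=None):
    """Generate random login test data via a (seedable) random algorithm."""
    rng = random.Random(seed)
    username = "user_" + "".join(rng.choices(string.ascii_lowercase, k=6))
    password = "".join(rng.choices(string.ascii_letters + string.digits, k=10))
    return {"username": username, "password": password}

def destroy_test_data(store, data):
    """Clean up used test data so it cannot affect later test runs."""
    store.pop(data["username"], None)
```

Seeding the generator makes a failing run reproducible while still allowing diverse, parameterized data across runs.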
In this embodiment, the unified identifier of a test case is testinfo, which comprises the test case number, test case title, test case description, and the like. Specifically, after the test cases are obtained, they are combined according to the actual test scene and the actual operation sequence to obtain a test case set (i.e., a scene file). In actual operation, single case sets, function case sets, smoke case sets, regression case sets, full-function case sets, and the like can be configured as needed; this embodiment does not limit the specific type of test case set.
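The testinfo identifier and the grouping of cases into named case sets could be represented as follows; the tag mechanism and the example case contents are assumptions of this sketch, not the patent's data model:

```python
# testinfo carries the unified identifier fields named above; tags select
# which case sets (smoke, regression, ...) a case belongs to.
CASES = [
    {"testinfo": {"number": "TC-001", "title": "valid login",
                  "description": "login succeeds with correct credentials"},
     "tags": {"smoke", "regression"}},
    {"testinfo": {"number": "TC-002", "title": "wrong password",
                  "description": "login is rejected"},
     "tags": {"regression"}},
]

def build_case_set(cases, tag):
    """Assemble a case set (smoke, regression, full-function, ...) by tag."""
    return [case for case in cases if tag in case["tags"]]
```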
S30: acquiring a target test instruction for testing a user interface;
S40: calling the corresponding target test case set according to the target test instruction and the preset execution script, running each test case in the target test case set, testing the user interface, and obtaining a test result file.
Specifically, when the user interface needs to be tested, the target test instruction is first acquired and the test environment corresponding to the user interface is checked; when the environment satisfies the conditions, the appium-mac-driver is started, the AppiumForMac server is started, and the connection between the AppiumForMac server and the App to be tested on the user interface is established; the configuration information of the terminal device is read and the session is created.
Then, the corresponding target test case set is called according to the target test instruction and the preset execution script, each test case in the target test case set file is parsed, the corresponding commands in the test case are passed through unittest to the AppiumForMac client, and the client sends the received commands together with the determined configuration information to the AppiumForMac server. Specifically, the AppiumForMac client parses and traverses the layout file of the App to be tested according to the positioning keywords in the test case, and obtains the hierarchical view tree through the accessibility inspection tool provided by Xcode, so as to locate the page controls in the user interface of the App to be tested.
The AppiumForMac server listens on a port (default 4622), receives the commands sent by the AppiumForMac client, determines the App to be tested corresponding to the target test instruction, parses the commands, converts them into an instruction form the App can understand, and sends them to the App. The App to be tested receives the instruction and, through the inner PFAssistive framework, calls the macOS accessibility API; WebDriver command handlers call the macOS mouse/keyboard APIs to receive and execute the instruction. At this point the user's operation behaviors are simulated on the App to be tested, so the execution of the test case can be observed directly. Specifically, click, move, and text input instructions are sent to the App to be tested by calling AppiumForMac APIs such as click(element), move_to_element(element), and send_keys(values); the values involved are the test data taken from the test case. The AppiumForMac client generates Mac OS native keyboard and mouse events from the operation and calls Selenium methods to handle mouse and keyboard operations: ActionChains handles mouse-related operations in Selenium, and the Keys() class provides methods simulating almost all keys on a keyboard. The program code simulating the user's operation behaviors is encapsulated into a BasePage base class; the page class operated by each test case inherits from BasePage, manages the elements in the page through the driver, and encapsulates the page's operation behaviors into corresponding methods.
When the operation behaviors in the test case have been executed, actual operation data is obtained; the corresponding execution result check file is called by the execution result check keyword in the test case, and the actual operation data is verified. After verification, the test result file is obtained; the App to be tested returns it to the AppiumForMac server, which passes it to the AppiumForMac client.
In actual operation, a corresponding test log is generally output according to an actual test condition, for example, time for executing a user interface test, operation detailed information, and the like, the obtained test log is stored in a specified position, and an error log is stored in a test result file at the same time. In addition, in actual operation, the output level of the test log can be further set to judge the importance of the test log. In addition, in actual operation, a test log can be printed out in the test process according to actual requirements, so that the test condition can be checked in real time.
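A test logger with a settable output level, as described above, could be configured like this; the logger name and the format string are assumptions of this sketch:

```python
import logging

def make_test_logger(level=logging.INFO):
    """Create the test logger with a configurable output level; error
    records can additionally be copied into the test result file."""
    logger = logging.getLogger("ui_test")
    logger.setLevel(level)
    if not logger.handlers:              # avoid stacking duplicate handlers
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
    return logger
```

Raising the level to WARNING suppresses routine execution detail while keeping the error records that belong in the test result file.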
It should be noted that this embodiment can implement regression testing of the relatively stable functions, verifying whether newly added functions affect the original ones; the tests can run automatically while unattended, which reduces the cost of regression testing and improves test efficiency.
According to the test method of the user interface provided by the embodiment of the invention, a test case is set in advance according to a page control of the user interface, an operation behavior of the page control and an execution result verification mode; then, setting a test case set corresponding to each test scene by using the test cases; therefore, when the user interface test is needed, the corresponding target test case set in the test case set can be called according to the target test instruction and the preset execution script only by inputting the target test instruction, the user interface test is realized by running each test case in the target test case set, and the test result file is obtained. Therefore, compared with the method for realizing the user interface test through manual execution in the prior art, the method realizes the automatic test of the user interface by utilizing the preset test case set and the preset execution script, can reduce the requirement on the skill level of testers, reduces the consumption of manpower resources and improves the test efficiency; the reusability of the test case set and the preset execution script is strong, so that the test time can be greatly saved; in addition, different user interfaces can be tested simultaneously by calling the execution script and the test case set, so that the large-batch test requirements are met, and the test efficiency is further improved.
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution. Specifically, this embodiment further includes:
setting common keywords by using the same page control and the same operation behavior of the page control in different test scenes;
Correspondingly, the process of setting the test case in advance according to the page control of the user interface, the operation behavior of the page control, and the execution result verification mode specifically includes:
setting a test case in advance according to the positioning keywords of the page controls of the user interface, the operation keywords of the operation behaviors on those controls, the common keywords, and the execution result verification keywords.
This embodiment further considers that the same operation steps may exist in different test scenarios, where "same operation steps" means that the page controls and operation behaviors involved are identical. For example, different test scenarios may all require entering a user name and password to log in, and the page controls and operation behaviors of this login operation are the same everywhere. Therefore, in this embodiment, the positioning keywords and operation keywords corresponding to such identical steps are set as common keywords. Test cases are then composed from the positioning keywords, operation keywords, common keywords, and execution result verification keywords.
Therefore, by setting common keywords for the same page controls and the same operation behaviors that recur across test scenarios, the method provided by this embodiment reduces repeated definition of positioning and/or operation keywords, simplifies the test cases, and improves the efficiency of setting them up.
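The common-keyword idea can be sketched as a small expansion step: a test case is a list of (operation keyword, positioning keyword, value) steps, and a `("common", name)` step is replaced by a shared step list defined once. The keyword tables and names here are illustrative assumptions, not the patent's format.

```python
# Illustrative common-keyword table: the login steps are identical
# across scenarios, so they are factored out once.
COMMON_KEYWORDS = {
    "login": [("input", "username_box", "tester"),
              ("input", "password_box", "secret"),
              ("click", "login_button", None)],
}

def expand_case(steps):
    """Expand a test case: a plain step is
    (operation keyword, positioning keyword, value); a ('common', name)
    step is replaced in place by the shared step list of that name."""
    expanded = []
    for step in steps:
        if step[0] == "common":
            expanded.extend(COMMON_KEYWORDS[step[1]])
        else:
            expanded.append(step)
    return expanded
```

A scenario case then only names the common keyword instead of repeating the three login steps.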
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution. Specifically, in this embodiment, while running each test case in the target test case set and testing the user interface to obtain the test result file, the method further includes:
sending out corresponding prompt information when an abnormal test result exists in the test result file.
Specifically, in this embodiment, during the running of each test case in the target test case set, every time actual running data is obtained, the corresponding preset execution result verification file is called according to the execution result verification keyword, so whether the actual running data is normal is judged promptly; if the actual running data is determined to be abnormal, that is, an abnormal test result exists in the test result file, corresponding prompt information is sent out.
Specifically, a prompt device can be preset and triggered to issue the prompt information whenever an abnormal test result exists in the test result file. The prompt device may be a buzzer, an indicator light, a display, or the like; by driving it to emit sound, light, or displayed text, the user is intuitively informed that the current test result contains an abnormality, which further improves the user experience.
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution. Specifically, in this embodiment, while running each test case in the target test case set and testing the user interface to obtain the test result file, the method further includes:
calling a screenshot script to capture the current test operation when an abnormal test result exists in the test result file.
Specifically, in this embodiment, during the running of each test case in the target test case set, every time actual running data is obtained, the corresponding preset execution result verification file is called according to the execution result verification keyword, and whether the actual running data is normal is judged promptly; if the actual running data is determined to be abnormal, that is, an abnormal test result exists in the test result file, the preset screenshot script is called to capture the current test operation, saving the state of the user interface at that moment.
It should be noted that when an abnormal test result exists in the test result file, that is, when an abnormality is detected while testing the user interface, the screenshot script captures the current test operation and the resulting screenshot file is stored in the test result file. This makes it convenient for the user to examine and analyze the specific test conditions: with the test result file and the corresponding screenshot file, the cause of the abnormality can be determined and a solution obtained more accurately, maintenance becomes easier, and the user experience is further improved.
In addition, it should be noted that in actual operation a screenshot can also be taken in response to a user's operation instruction. In other words, a screenshot may be triggered either by an abnormal test result during the test process or directly by the user's instruction, further improving the user experience.
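The screenshot-on-failure behavior can be sketched as a wrapper around a single test case: if the case raises, a screenshot of the current operation is saved into the result directory before the failure is recorded. The function and field names are hypothetical; `capture` is any callable taking a file path (for example, a Selenium driver's `save_screenshot`), passed in so the sketch stays framework-neutral.

```python
import os
import time

def run_case_with_screenshot(case_name, action, capture, result_dir):
    """Run one test case; if it raises, capture a screenshot of the
    current operation into the test result directory, then record the
    failure together with the screenshot path."""
    try:
        action()
        return {"case": case_name, "status": "pass", "screenshot": None}
    except Exception as exc:
        path = os.path.join(result_dir, f"{case_name}_{int(time.time())}.png")
        capture(path)  # save the UI state at the moment of failure
        return {"case": case_name, "status": f"fail: {exc}", "screenshot": path}
```

The returned record is what would be appended to the test result file, linking each abnormal result to its screenshot.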
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution. Specifically, in this embodiment, after the corresponding target test case set is called from the test case sets according to the target test instruction and the preset execution script, each test case in the target test case set is run, the user interface is tested, and the test result file is obtained, the method further includes:
performing statistical analysis on the test result file to obtain an analysis report.
Specifically, in this embodiment, after the test result file is obtained, it is further subjected to statistical analysis to obtain an analysis report.
Specifically, the statistical analysis of the test result file covers items such as the relevant configuration of the terminal device, the version number of the App under test, the total number of executed cases, the number of passed cases, the number of failed cases, error records, details of the test operation steps, logs, and screenshots. The total, passed, and failed case counts can further be rendered as a pie chart, so that their proportions can be viewed intuitively. It should be noted that in other embodiments the test results may instead be charted as a tree diagram or line chart according to the user's actual requirements, which this embodiment does not limit.
Therefore, this embodiment further performs statistical analysis on the test result file to obtain an analysis report, allowing the user to view the test results of the user interface more intuitively and further improving the user experience.
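The aggregation step behind the pie chart — total, passed, and failed case counts and their proportions — can be sketched as a small pure-Python summary over the result mapping. The function name and result-file shape are illustrative assumptions; a charting library would then consume the returned ratios.

```python
def summarize_results(results):
    """Aggregate a test result mapping (case -> 'pass' / 'fail: ...')
    into the counts the analysis report needs: total, passed, failed,
    and the share of each, ready to be fed to a pie chart."""
    total = len(results)
    passed = sum(1 for v in results.values() if v == "pass")
    failed = total - passed
    return {
        "total": total,
        "passed": passed,
        "failed": failed,
        "pass_ratio": passed / total if total else 0.0,
        "fail_ratio": failed / total if total else 0.0,
    }
```

The same dictionary could equally drive a tree diagram or line chart, matching the embodiment's note that the chart type is not limited.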
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution. Specifically, in this embodiment, after performing statistical analysis on the test result file to obtain an analysis report, the method further includes:
sending the test result file and/or the analysis report to a target client.
In this embodiment, after the analysis report is obtained, the test result file and/or the analysis report is further sent to the target client, for example by mail or short message, which this embodiment does not limit.
It can be understood that by sending the test result file and/or the analysis report to the target client, other users can conveniently view the user interface test results from that client, further improving the user experience.
It should be noted that in actual operation the test result file and analysis report may also be saved to a database for later viewing. In some cases, to avoid confusion between successive test result files/analysis reports and to free the storage space they occupy, they may be cleaned up and destroyed after being sent to the target client, further improving the accuracy and convenience of the user interface test.
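For the mail option, the composition step can be sketched with Python's standard `email.message.EmailMessage`; actual delivery would be one `smtplib.SMTP(...).send_message(msg)` call, omitted here so the sketch runs without a mail server. Addresses, subject, and filename are illustrative assumptions.

```python
from email.message import EmailMessage

def build_report_mail(sender, recipient, report_name, report_bytes):
    """Compose the mail carrying the analysis report as an attachment;
    sending it to the target client is left to smtplib."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "UI test analysis report"
    msg.set_content("The latest user interface test report is attached.")
    msg.add_attachment(report_bytes, maintype="application",
                       subtype="octet-stream", filename=report_name)
    return msg
```

The same message object could attach both the test result file and the analysis report, matching the "and/or" in the embodiment.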
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution. Specifically, in this embodiment, the process of calling the corresponding target test case set according to the target test instruction and the preset execution script, running each test case in the target test case set, testing the user interface, and obtaining the test result file specifically includes:
calling a corresponding target test case set in the test case set according to the target test instruction and a preset execution script;
if the calling of the target test case set fails or the execution of the test cases in the target test case set fails, calling the target test case set again or executing the test cases again after a preset time length;
otherwise, testing the user interface by using each test case in the target test case set to obtain a test result file.
Specifically, in this embodiment, when the user interface test is carried out according to the target test instruction and the target test case set, it is further considered that in actual operation a network or system failure may cause the call to the target test case set, or the execution of a test case within it, to fail. For such situations, this embodiment presets a waiting duration: when a failure to call the target test case set or to execute one of its test cases is detected, the call or the execution is restarted after the preset duration has elapsed.
In addition, in actual operation a maximum number of automatic retries can be set: after each failed retry the current retry count is updated, and once it reaches the maximum, retrying stops and an error is reported. This avoids an endless retry loop and further improves the efficiency of the user interface test.
It should be noted that, in actual operation, if the preset interval is 0 and the maximum number of automatic retries is 1, this can be implemented by calling an API of Python's unittest framework: when the first run of a test fails, it is immediately rerun once, and after the rerun (whether or not it succeeds) execution continues with the next test case.
Thus, this embodiment provides an automatic retry mechanism for test failures, which reruns the operation after a preset duration. This prevents the user interface test from failing merely because of slow page loading or other network-related conditions, and effectively improves the success rate and stability of executing the target test case set.
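The retry mechanism described above — wait a preset duration, retry up to a maximum count, then report the error — can be sketched as a generic wrapper. The function name and parameters are illustrative; `sleep` is injectable so the wait can be faked in tests.

```python
import time

def run_with_retry(operation, max_retries=3, wait_seconds=5, sleep=time.sleep):
    """Re-run a failing call (e.g. calling the target test case set or
    executing one of its cases) after a preset interval, up to a maximum
    retry count, then re-raise -- avoiding an endless retry loop."""
    attempt = 0
    while True:
        try:
            return operation()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise  # maximum reached: stop retrying and report the error
            sleep(wait_seconds)  # wait the preset duration before retrying
```

With `wait_seconds=0` and `max_retries=1` this reduces to the immediate single rerun the embodiment notes can be achieved through Python's unittest framework.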
The embodiments of the user interface testing method provided by the present invention are described in detail above. The present invention further provides a corresponding user interface testing apparatus, device, and computer-readable storage medium.
Fig. 3 is a structural diagram of a testing apparatus for a user interface according to an embodiment of the present invention, and as shown in fig. 3, the testing apparatus for a user interface includes:
the test case setting module 31 is configured to set a test case in advance according to a page control of the user interface, an operation behavior on the page control, and an execution result verification manner;
a test case set setting module 32, configured to set, by using the test cases, test case sets corresponding to the test scenes;
an instruction obtaining module 33, configured to obtain a target test instruction for testing a user interface;
The running test module 34 is configured to call the corresponding target test case set from the test case sets according to the target test instruction and the preset execution script, run each test case in the target test case set, test the user interface, and obtain a test result file.
The user interface testing apparatus provided by the embodiment of the invention achieves the same beneficial effects as the user interface testing method described above.
As a preferred embodiment, further comprising:
the public keyword setting module is used for setting public keywords by using the same page control and the same operation behavior of the page control in different test scenes;
correspondingly, the test case setting module 31 specifically includes:
The test case setting submodule is configured to set a test case in advance according to the positioning keywords of the page controls of the user interface, the operation keywords of the operation behaviors on those controls, the common keywords, and the execution result verification keywords.
As a preferred embodiment, further comprising:
The prompt module is configured to send out corresponding prompt information when an abnormal test result exists in the test result file.
As a preferred embodiment, further comprising:
The screenshot module is configured to call a screenshot script to capture the current test operation when an abnormal test result exists in the test result file.
As a preferred embodiment, further comprising:
The analysis module is configured to perform statistical analysis on the test result file to obtain an analysis report.
As a preferred embodiment, further comprising:
The sending module is configured to send the test result file and/or the analysis report to a target client.
Fig. 4 is a structural diagram of a testing apparatus for a user interface according to an embodiment of the present invention, and as shown in fig. 4, the testing apparatus for a user interface includes:
a memory 41 for storing a computer program;
a processor 42 for implementing the steps of the method of testing a user interface as described above when executing the computer program.
The user interface testing device provided by the embodiment of the invention achieves the same beneficial effects as the user interface testing method described above.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method for testing the user interface.
The computer-readable storage medium provided by the embodiment of the invention achieves the same beneficial effects as the user interface testing method described above.
The method, apparatus, device and computer readable storage medium for testing a user interface provided by the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are set forth only to help understand the method and its core ideas of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

Claims (10)

1. A method for testing a user interface, comprising:
setting a test case in advance according to a page control of a user interface, an operation behavior of the page control and an execution result verification mode;
setting a test case set corresponding to each test scene by using the test cases;
acquiring a target test instruction for testing the user interface;
and calling a corresponding target test case set in the test case set according to the target test instruction and a preset execution script, running each test case in the target test case set, and testing the user interface to obtain a test result file.
2. The method of claim 1, further comprising:
setting common keywords by using the same page control in different test scenes and the same operation behavior of the page control;
correspondingly, the process of setting the test case in advance according to the page control of the user interface, the operation behavior of the page control and the execution result verification mode specifically includes:
and setting the test case in advance according to the positioning keywords of the page control of the user interface, the operation keywords of the operation behavior of the page control, the common keywords and the execution result verification keywords.
3. The method according to claim 1, wherein when the running each test case in the target set of test cases tests the user interface to obtain a test result file, the method further comprises:
and sending out corresponding prompt information when the abnormal test result exists in the test result file.
4. The method according to claim 3, wherein when the running each test case in the target set of test cases tests the user interface to obtain a test result file, further comprises:
and calling a screenshot script to perform screenshot on the current test operation when the abnormal test result exists in the test result file.
5. The method according to claim 4, wherein after the calling a target test case set corresponding to the test case set according to the target test instruction and a preset execution script, running each test case in the target test case set, testing the user interface, and obtaining a test result file, the method further comprises:
and carrying out statistical analysis on the test result file to obtain an analysis report.
6. The method of claim 5, wherein after said performing a statistical analysis on said test result file to obtain an analysis report, further comprising:
and sending the test result file and/or the analysis report to a target client.
7. The method according to any one of claims 1 to 6, wherein the process of calling a corresponding target test case set in the test case set according to the target test instruction and a preset execution script, running each test case in the target test case set, testing the user interface, and obtaining a test result file specifically includes:
calling a corresponding target test case set in the test case set according to the target test instruction and the preset execution script;
if the calling of the target test case set fails or the execution of the test cases in the target test case set fails, calling the target test case set again or executing the test cases again after a preset time length;
otherwise, testing the user interface by using each test case in the target test case set to obtain the test result file.
8. A user interface testing apparatus, comprising:
the test case setting module is used for setting a test case in advance according to a page control of a user interface, an operation behavior of the page control and an execution result verification mode;
the test case set setting module is used for setting a test case set corresponding to each test scene by using the test cases;
the instruction acquisition module is used for acquiring a target test instruction for testing the user interface;
and the operation test module is used for calling a corresponding target test case set in the test case set according to the target test instruction and a preset execution script, operating each test case in the target test case set, testing the user interface and obtaining a test result file.
9. A user interface testing apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method of testing a user interface according to any one of claims 1 to 7 when executing said computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method for testing a user interface according to any one of claims 1 to 7.
CN201911347813.7A 2019-12-24 2019-12-24 User interface testing method, device, equipment and storage medium Pending CN111124919A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911347813.7A CN111124919A (en) 2019-12-24 2019-12-24 User interface testing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911347813.7A CN111124919A (en) 2019-12-24 2019-12-24 User interface testing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111124919A true CN111124919A (en) 2020-05-08

Family

ID=70500174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911347813.7A Pending CN111124919A (en) 2019-12-24 2019-12-24 User interface testing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111124919A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112099696A (en) * 2020-09-27 2020-12-18 四川长虹电器股份有限公司 Password control positioning method
CN112468564A (en) * 2020-11-20 2021-03-09 浙江百应科技有限公司 Method for realizing automatic multi-machine parallel of terminal APP UI based on Apdium


Similar Documents

Publication Publication Date Title
CN105094783B (en) method and device for testing stability of android application
US9465718B2 (en) Filter generation for load testing managed environments
WO2020140820A1 (en) Software testing method, system, apparatus, device, medium, and computer program product
US20130311827A1 (en) METHOD and APPARATUS for automatic testing of automation software
US8731896B2 (en) Virtual testbed for system verification test
KR20080068385A (en) Program test system, method and computer readable medium on which program for executing the method is recorded
CN109302522B (en) Test method, test device, computer system, and computer medium
US20190188119A1 (en) System and a method for providing automated performance detection of application programming interfaces
CN111124919A (en) User interface testing method, device, equipment and storage medium
CN111949545A (en) Automatic testing method, system, server and storage medium
US8997048B1 (en) Method and apparatus for profiling a virtual machine
CN114546738A (en) Server general test method, system, terminal and storage medium
CN111897724B (en) Automatic testing method and device suitable for cloud platform
US10942837B2 (en) Analyzing time-series data in an automated application testing system
CN112199284A (en) Program automation testing method and corresponding device, equipment and medium
CN113590454A (en) Test method, test device, computer equipment and storage medium
us Saqib et al. Functionality, performance, and compatibility testing: A model based approach
CN112214407A (en) Data verification control and execution method and corresponding device, equipment and medium
CN111651366A (en) SDK test method, device, equipment and storage medium
CN111666200A (en) Testing method and terminal for time consumption of cold start of PC software
CN110795330A (en) Monkey pressure testing method and device
CN111400171B (en) Interface testing method, system and device and readable storage medium
US8898636B1 (en) Method and apparatus for testing an application running in a virtual machine
CN113986263A (en) Code automation test method, device, electronic equipment and storage medium
CN114116466A (en) Unit testing method, device and medium based on operation log

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination