CN113032243A - Intelligent testing method and system for GUI (graphical user interface) of mobile application program - Google Patents
- Publication number
- CN113032243A (application CN202110116469.1A)
- Authority
- CN
- China
- Prior art keywords
- gui
- graphic element
- screenshot
- mobile application
- interactive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Hardware Design (AREA)
- Quality & Reliability (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a method and system for intelligent testing of the GUI (graphical user interface) of a mobile application, wherein the method comprises: training a target detection model on GUI screenshots annotated with graphic-element labels; training a deep reinforcement learning model on GUI interaction event sequences annotated with both graphic-element labels and interaction labels; obtaining a GUI screenshot of the current state of the mobile application; feeding it to the trained target detection model to obtain the element-annotated current-state screenshot; detecting whether the annotated elements contain GUI defects; feeding the annotated screenshot to the trained deep reinforcement learning model to obtain the current-state screenshot annotated with both graphic-element and interaction labels; applying the indicated operation action to the operated graphic element and updating the state of the mobile application; and deciding whether to stop testing according to the reward function of the deep reinforcement learning model. The invention improves the automation level and efficiency of automated GUI testing of mobile applications.
Description
Technical Field
The invention relates to the technical field of mobile application testing, in particular to an intelligent testing method and system for a GUI (graphical user interface) of a mobile application program.
Background
Conventional approaches to Graphical User Interface (GUI) testing typically require a tester to manually compile a large number of test cases, involving many mechanical and repetitive tasks that are time consuming and laborious. Also, even small changes to the GUI can affect the entire test suite, potentially invalidating the original test cases. Currently, due to the large number of interactions on a GUI and the increasing complexity of mobile applications, it is virtually impossible to generate enough test cases to cover all GUI tests. The main challenges of traditional mobile application GUI testing are: fast upgrade iterations for mobile applications, fast updates of operating systems, and diversity in device and screen resolutions. Therefore, more efficient testing methods are needed to meet the testing requirements of mobile applications.
At present, researchers have begun to apply deep learning techniques to GUI testing. Li et al. proposed Humanoid, a deep-learning-based automated Android application testing method that learns to generate test sequences from human-generated interaction event sequences. Davida et al. proposed a reinforcement-learning-based automated Android GUI testing method that selects events and explores the GUI of the application under test with a Q-learning-based test generation algorithm. Yavuz Koroglu et al. proposed FARLEAD-Android, which uses reinforcement learning to generate tests satisfying a given linear temporal logic specification. However, these methods do not identify and locate GUI graphic elements, so they cannot operate on elements as precisely as a human tester; actions may land on invalid regions, producing redundant tests. Nor do they detect and classify GUI defects, which limits the efficiency and quality of automatically generated test cases.
Disclosure of Invention
Based on this, the invention aims to provide an intelligent testing method and system for the GUI of a mobile application that improve the automation level and efficiency of testing.
In order to achieve the purpose, the invention provides the following scheme:
a mobile application GUI smart testing method, the method comprising:
obtaining a plurality of GUI screenshots of a mobile application;
annotating each GUI screenshot with graphic-element labels through a labeling tool to obtain annotated GUI screenshots; a graphic-element label comprises the bounding-box location and the category of a graphic element on the GUI screenshot;
training a target detection algorithm with the raw GUI screenshots as input and the annotated GUI screenshots as output, to obtain a trained target detection model;
acquiring manually interacted GUI interaction event sequences, each comprising a series of GUI screenshots of consecutive interactions;
inputting the GUI interaction event sequences into the trained target detection model to obtain element-annotated GUI interaction event sequences;
annotating the element-annotated GUI interaction event sequences with interaction labels, to obtain sequences carrying both graphic-element and interaction labels; an interaction label comprises the interaction's operation action and the operated graphic element;
training a deep reinforcement learning algorithm with the element-annotated GUI interaction event sequences as input and the sequences carrying both label types as output, to obtain a trained deep reinforcement learning model;
obtaining a GUI screenshot of the current state of the mobile application;
inputting the current-state GUI screenshot into the trained target detection model to obtain the element-annotated current-state screenshot;
detecting whether a GUI defect exists according to the element-annotated current-state screenshot, and if so, recording the GUI defect;
inputting the element-annotated current-state screenshot into the trained deep reinforcement learning model to obtain the current-state screenshot annotated with both graphic-element and interaction labels;
applying the indicated operation action to the operated graphic element according to that screenshot, and updating the state of the mobile application;
judging whether the reward function in the deep reinforcement learning model has reached the stop condition;
and if not, returning to the step of obtaining a GUI screenshot of the current state of the mobile application.
Optionally, the determining whether the reward function in the deep reinforcement learning model reaches the stop condition specifically includes:
if the change of the reward function in the deep reinforcement learning model within a set time is smaller than a set threshold, stopping the intelligent GUI test of the mobile application.
Optionally, the reward function is positively correlated with the cumulative totals of mobile application GUI states and GUI defects, where only previously unseen GUI states and previously unrecorded GUI defects are accumulated.
Optionally, the method further comprises:
detecting whether GUI defects exist according to the GUI screenshot of the current state of the graphic element labeling;
and if the GUI defect exists, recording the json file of the GUI defect.
Optionally, before the obtaining of the GUI screenshot of the current state of the mobile application, the method further includes:
and performing connection interaction with the mobile application program through an automatic testing tool.
Optionally, the operated graphic element categories include: text buttons, icons, radio buttons, check boxes, sliders, switches, page indicators, input boxes, and GUI defects; the GUI defects include: loading errors, garbled text, blank pages, and error prompts.
Optionally, the bounding box location comprises coordinates, width and height of the bounding box.
Optionally, the operation action comprises clicking, double clicking, long pressing, sliding and inputting; the sliding includes sliding left, right, up, and down.
The invention also discloses a mobile application program GUI intelligent test system, which comprises:
the multiple GUI screenshot obtaining module is used for obtaining multiple GUI screenshots of the mobile application program;
the first graphical element labeling module is used for labeling the graphical element labels for the GUI screenshots through a labeling tool to obtain the GUI screenshots labeled with the graphical element labels; the graphical element tag comprises a bounding box location and a category of a graphical element on the GUI screenshot;
the target detection model training module is used for taking the GUI screenshot as input and taking the GUI screenshot labeled with the graphic element as output to train a target detection algorithm so as to obtain a trained target detection model;
the GUI interaction event sequence acquisition module is used for acquiring a GUI interaction event sequence of manual interaction, and the GUI interaction event sequence comprises a series of GUI screenshots of continuous interaction;
the second graphical element labeling module is used for inputting the GUI interaction event sequence into the trained target detection model to obtain the GUI interaction event sequence labeled by the graphical elements;
the interactive labeling module is used for labeling the interactive labels for the GUI interactive event sequences labeled by the graphic element labels to obtain the GUI interactive event sequences labeled by the graphic element labels and the interactive labels; the interactive label comprises an interactive operation action and an operated graphic element;
the deep reinforcement learning model training module is used for training a deep reinforcement learning algorithm with the element-annotated GUI interaction event sequences as input and the sequences annotated with both graphic-element and interaction labels as output, to obtain a trained deep reinforcement learning model;
the GUI screenshot acquiring module of the current state is used for acquiring the GUI screenshot of the current state of the mobile application program;
the target detection model application module is used for inputting the GUI screenshot of the current state into the trained target detection model to obtain the GUI screenshot of the current state of the graphical element label;
the GUI screenshot detection module is used for detecting whether GUI defects exist according to the GUI screenshot of the current state of the graphic element labeling, and if so, recording the GUI defects;
the deep reinforcement learning model application module is used for inputting the GUI screenshot of the current state of the graphic element labeling into the trained deep reinforcement learning model to obtain the GUI screenshot of the current state of the graphic element labeling and the interactive labeling;
the mobile application program updating module is used for applying the operation action to the operated graphic element according to the GUI screenshot of the current state of the graphic element labeling and the interactive label and updating the state of the mobile application program;
the judging module is used for judging whether the reward function in the deep reinforcement learning model has reached the stop condition;
and the return module is used for returning to the GUI screenshot obtaining module in the current state if the reward function does not reach the stop condition.
Optionally, the judging module specifically includes:
and the judging unit is used for stopping the intelligent test of the GUI of the mobile application program if the change of the reward function in the deep reinforcement learning model is less than a set threshold value within set time.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the method comprises the steps of automatically positioning and classifying the boundary frames of graphic elements on the GUI screenshot of the mobile application program through a target detection model, detecting GUI defects, inputting the GUI screenshot marked with the boundary frame positioning and classification into a deep reinforcement learning model, automatically acquiring interactive operation actions and operated graphic elements, automatically applying the operation actions to the operated graphic elements, updating the state of the mobile application program, and further detecting the next state. The graphic elements on the GUI are identified through the target detection model, and the state of the mobile application program is updated through the deep reinforcement learning model, so that the automation level and efficiency of the GUI automation test of the mobile application program are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flowchart illustrating a method for testing a GUI of a mobile application according to the present invention;
FIG. 2 is a schematic diagram of a mobile application GUI testing system according to the present invention;
FIG. 3 is a flowchart illustrating a status update process of a mobile application according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide an intelligent testing method and system for a GUI (graphical user interface) of a mobile application program, which improve the automation level and efficiency of testing.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a schematic flow chart of an intelligent testing method for a GUI of a mobile application according to the present invention, and as shown in fig. 1, the intelligent testing method for the GUI of the mobile application includes the following steps:
step 101: a plurality of GUI screenshots of a mobile application are obtained. Obtaining the plurality of GUI screenshots of the mobile application specifically includes obtaining the plurality of GUI screenshots of the mobile application.
Step 102: each GUI screenshot is annotated with graphic-element labels through a labeling tool, yielding annotated GUI screenshots; a graphic-element label includes the bounding-box location and the category of a graphic element on the GUI screenshot.
The categories of graphic elements include text buttons, icons, radio buttons, check boxes, sliders, switches, page indicators, input boxes, and GUI defects; the GUI defects include loading errors, garbled text, blank pages, and error prompts. The bounding-box location includes the coordinates, width, and height of the bounding box.
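As an illustrative sketch (not part of the patent text), the label content just described — a category plus a bounding box given by coordinates, width, and height — might be represented as follows; all type and field names are assumptions:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical category names derived from the description above;
# "gui_defect" stands in for the defect classes.
ELEMENT_CATEGORIES = [
    "text_button", "icon", "radio_button", "check_box",
    "slider", "switch", "page_indicator", "input_box", "gui_defect",
]

@dataclass
class BoundingBox:
    x: float       # top-left x coordinate in pixels
    y: float       # top-left y coordinate in pixels
    width: float
    height: float

@dataclass
class ElementLabel:
    category: str  # one of ELEMENT_CATEGORIES
    box: BoundingBox

def label_to_json(label: ElementLabel) -> str:
    """Serialize one graphic-element label as a json record."""
    return json.dumps(asdict(label))
```

A labeling tool such as labelme produces its own annotation format; this sketch only shows the information content the patent says a label carries.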
Step 103: and taking the GUI screenshot as input, and taking the GUI screenshot labeled with the graphic element as output to train the target detection algorithm, so as to obtain a trained target detection model.
Each raw GUI screenshot and its corresponding annotated counterpart form a training pair, and the pairs constitute the training set of the target detection model. Training stops when the result precision stabilizes, at which point the target detection model is trained.
Step 104: a sequence of manually-interacted GUI interaction events is obtained, the sequence of GUI interaction events comprising a series of GUI screenshots of successive interactions.
Step 105: and inputting the GUI interaction event sequence into the trained target detection model to obtain the GUI interaction event sequence labeled by the graphic element.
Step 106: marking interactive labels for the GUI interactive event sequences marked with the graphic element labels to obtain the GUI interactive event sequences marked with the graphic element labels and the interactive labels; the interactive label comprises interactive operation actions and operated graphic elements.
The operated graphic element is an operation object corresponding to the operation action; the operation actions comprise clicking, double clicking, long pressing, sliding and inputting; the sliding includes sliding left, right, up, and down.
Step 107: and taking the GUI interactive event sequence marked with the graphic element labels as input, and taking the GUI interactive event sequence marked with the graphic element labels and the interactive labels as output to train the deep reinforcement learning algorithm, thereby obtaining a trained deep reinforcement learning model.
Each element-annotated GUI interaction event sequence and its corresponding sequence carrying both graphic-element and interaction labels form a training pair, and the pairs constitute the training set of the deep reinforcement learning model. Training stops when the result precision stabilizes, at which point the deep reinforcement learning model is trained.
Step 108: and acquiring a GUI screenshot of the current state of the mobile application program.
Step 109: and inputting the GUI screenshot of the current state into the trained target detection model to obtain the GUI screenshot of the current state of the graphic element label.
Step 110: and detecting whether GUI defects exist according to the GUI screenshot of the current state of the graphic element labeling, and if so, recording the GUI defects.
Recording the GUI defect specifically means writing a json file that records the GUI defect.
Step 111: and inputting the GUI screenshot of the current state of the graphic element labeling into the trained deep reinforcement learning model to obtain the GUI screenshot of the current state of the graphic element labeling and the interactive labeling.
Step 112: and according to the GUI screenshot of the current state of the graphic element labeling and the interactive label, applying the operation action to the operated graphic element and updating the state of the mobile application program.
Step 113: judge whether the reward function in the deep reinforcement learning model has reached the stop condition; the condition is reached when the change of the reward function within a set time is smaller than a set threshold, i.e., the reward function has stabilized over that time, at which point the training of the deep reinforcement learning model stops.
If not, return to step 108.
If yes, go to step 114.
Step 114: the test is stopped.
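The stop criterion above — the reward change staying below a set threshold for a set time — can be sketched as a pure check over the recent reward history. This is an illustrative implementation, not the patent's; the window size and threshold are placeholder values:

```python
from collections import deque

def should_stop(reward_history, window=50, threshold=1.0):
    """Return True when the reward function has stabilized: the spread
    of the last `window` reward values is below `threshold`.
    `window` plays the role of the 'set time' and `threshold` the
    'set threshold' in the description; both are assumptions."""
    if len(reward_history) < window:
        return False  # not enough history to judge stability yet
    recent = list(reward_history)[-window:]
    return max(recent) - min(recent) < threshold
```

In the testing loop this would be evaluated after each state update (step 113), returning to step 108 while it is False.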
The state updates of the mobile application correspond to updates of its GUI state; successive updates form a GUI interaction event sequence, which constitutes a GUI state-transition directed graph: each node is a current-state GUI screenshot, and each directed edge is an operation action together with the operated graphic element. Each generated interaction event is recorded as a json file whose record object contains the graphic element category, the bounding-box location, the operation action, and the positions at which the operation starts and ends. Whenever the sequence reaches a new state, the GUI is screenshotted and saved, graphic elements are annotated, and GUI defects are detected and recorded.
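The state-transition directed graph and the per-event json record described above can be sketched as follows; the class, keying by screenshot hash, and field names are illustrative assumptions, not taken from the patent:

```python
import json
import hashlib

class GuiTransitionGraph:
    """Directed graph of GUI states: nodes are states (keyed here by a
    screenshot hash), edges are (interaction record, next state)."""

    def __init__(self):
        self.edges = {}  # state_key -> list of (record, next_state_key)

    @staticmethod
    def state_key(screenshot_bytes: bytes) -> str:
        # A content hash is one simple way to identify a GUI state.
        return hashlib.sha1(screenshot_bytes).hexdigest()

    def add_transition(self, src, dst, category, box, action,
                       start_pos, end_pos):
        """Add one directed edge and return its json event record,
        with the fields listed in the description above."""
        record = {
            "element_category": category,
            "bounding_box": box,        # [x, y, width, height]
            "action": action,           # e.g. "click", "swipe_left"
            "start_pos": start_pos,     # where the operation starts
            "end_pos": end_pos,         # where the operation ends
        }
        self.edges.setdefault(src, []).append((record, dst))
        return json.dumps(record)
```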
The target detection model identifies the GUI defect categories in the screenshot and generates bounding boxes; the defect categories include loading errors, garbled text, blank pages, and error prompts. Each GUI defect is recorded as a json file whose record object contains the defect category, the bounding box, the defect screenshot, the previous GUI screenshot, the previous operation action, and the operated graphic element.
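A minimal sketch of assembling a defect record with the fields just listed; the function and field names are assumptions for illustration:

```python
import json

# Hypothetical names for the four defect categories in the description.
DEFECT_CATEGORIES = ["loading_error", "garbled_text", "blank_page",
                     "error_prompt"]

def make_defect_record(defect_category, bounding_box,
                       defect_screenshot_path, prev_screenshot_path,
                       prev_action, prev_element):
    """Serialize one GUI-defect record as json, mirroring the record
    object described above."""
    assert defect_category in DEFECT_CATEGORIES
    return json.dumps({
        "defect_category": defect_category,
        "bounding_box": bounding_box,             # [x, y, w, h]
        "defect_screenshot": defect_screenshot_path,
        "previous_screenshot": prev_screenshot_path,
        "previous_action": prev_action,
        "operated_element": prev_element,
    })
```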
The reward function is positively correlated with the cumulative totals of mobile application GUI states and GUI defects, where only previously unseen GUI states and previously unrecorded GUI defects are accumulated.
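The no-repetition property of the reward — only new states and new defects add reward — can be sketched with two seen-sets. The specific weights are illustrative assumptions; the patent only states that the reward is positively correlated with both totals:

```python
def compute_reward(seen_states, seen_defects, state_key, defect_keys):
    """Reward increases only for GUI states and defects not counted
    before; repeats contribute nothing."""
    reward = 0.0
    if state_key not in seen_states:
        seen_states.add(state_key)
        reward += 1.0           # illustrative weight for a new GUI state
    for d in defect_keys:
        if d not in seen_defects:
            seen_defects.add(d)
            reward += 10.0      # illustrative weight for a new GUI defect
    return reward
```

Revisiting a known state with no new defects yields zero reward, so the cumulative reward flattens once exploration stops finding anything new — exactly the stabilization used as the stop condition.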
Wherein the method further comprises before step 108:
and performing connection interaction with the mobile application program through an automatic testing tool.
The operated graphic element categories include: text buttons, icons, radio buttons, check boxes, sliders, switches, page indicators, input boxes, and GUI defects.
The bounding box location includes coordinates, width, and height of the bounding box.
The intelligent GUI testing method for mobile applications solves problems that existing GUI testing approaches cannot. Applying the target detection algorithm to the mobile application GUI accurately identifies and classifies graphic elements and GUI defects, makes the detection targets explicit, and avoids operating on invalid regions; applying the deep reinforcement learning algorithm to GUI transitions and updates lets the computer operate graphic elements and detect GUI defects as precisely as a human tester, while also avoiding testers' blind spots.
As shown in fig. 2, the present invention also discloses an intelligent testing system for GUI of mobile application, the system includes:
a multiple GUI screenshot obtaining module 201, configured to obtain multiple GUI screenshots of multiple mobile applications.
A first graphic element labeling module 202, configured to label each GUI screenshot with a graphic element label through a labeling tool, to obtain each GUI screenshot labeled with a graphic element label; the graphic element tag includes a bounding box location and a category of a graphic element on the GUI screenshot. The process of labeling the graphical elements is to manually label each GUI screenshot with a labeling tool.
And the target detection model training module 203 is used for training a target detection algorithm by taking the GUI screenshot as input and the GUI screenshot labeled by the graphic element as output to obtain a trained target detection model.
And a GUI interaction event sequence acquiring module 204, configured to acquire a manually-interacted GUI interaction event sequence, where the GUI interaction event sequence includes a series of GUI screenshots of consecutive interactions.
And a second graphical element tagging module 205, configured to input the GUI interaction event sequence into the trained target detection model, and automatically obtain a graphical element tagged GUI interaction event sequence.
An interaction labeling module 206, configured to annotate each element-annotated GUI interaction event sequence with interaction labels, obtaining sequences that carry both graphic-element and interaction labels; an interaction label comprises the interaction's operation action and the operated graphic element, the operated graphic element being the operation object corresponding to the operation action.
A deep reinforcement learning model training module 207, configured to train a deep reinforcement learning algorithm with the element-annotated GUI interaction event sequences as input and the sequences carrying both label types as output, obtaining a trained deep reinforcement learning model.
And a GUI screenshot obtaining module 208 for obtaining a GUI screenshot of the current state of the mobile application.
And the target detection model application module 209 is configured to input the GUI screenshot of the current state into the trained target detection model, so as to obtain the GUI screenshot of the current state labeled with the graphical element.
And a GUI screenshot detecting module 210, configured to detect whether a GUI defect exists according to the GUI screenshot of the current status of the graphic element tagging, and if so, record the GUI defect.
And the deep reinforcement learning model application module 211 is configured to input the GUI screenshot of the current state of the graphic element tagging into the trained deep reinforcement learning model, so as to obtain the GUI screenshot of the current state of the graphic element tagging and the interactive tagging.
And the mobile application program updating module 212 is used for applying the operation action to the operated graphic element according to the GUI screenshot of the current state of the graphic element labeling and the interactive labeling and updating the state of the mobile application program.
And the judging module 213 is used for judging whether the reward function in the deep reinforcement learning model reaches the stop condition.
And a returning module 214, configured to return to the GUI screenshot obtaining module in the current state if the reward function does not meet the stop condition.
The determining module 213 specifically includes:
and if the variation of the reward function in the deep reinforcement learning model is smaller than a set threshold value within a set time, namely the reward function tends to be stable within the set time, stopping the intelligent test of the GUI of the mobile application program.
The reward function is positively correlated to the mobile application GUI state and a cumulative total of GUI defects; wherein the accumulated GUI states of the mobile application in the reward function are not repeated, and the GUI defects are not repeated.
Examples
The mobile applications and their GUI event sequence screenshots in this embodiment come from Google Play, the Huawei AppGallery, and the Rico dataset. Google Play and the Huawei AppGallery are online application stores for Android devices operated by Google and Huawei respectively, and the Rico dataset is a crowdsourced dataset containing mobile application GUI screenshots and their interaction data.
In this embodiment, the target detection algorithm adopts Mask R-CNN, which places no requirement on the input image size and can therefore accommodate a variety of devices and resolutions.
Graphic-element labels were annotated on 2100 GUI screenshots with the labelme tool; the label content includes the graphic-element bounding-box locations and categories. The annotated GUI screenshots and their labels were fed in pairs into the Mask R-CNN algorithm, with a ResNet-50 backbone network and an intersection-over-union (IoU) threshold of 0.7, and trained for 100 epochs to obtain the trained Mask R-CNN model, i.e., the target detection model.
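A hedged sketch of this setup using torchvision's Mask R-CNN builder (parameter names follow recent torchvision versions and may differ in older ones; the patent gives only the backbone, IoU threshold, and epoch count, so the rest is assumption). The helper converts a labelme-style polygon annotation into a training bounding box:

```python
def polygon_to_bbox(points):
    """Convert a labelme polygon (list of [x, y] points) into an
    [x_min, y_min, x_max, y_max] box usable as a detection target."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return [min(xs), min(ys), max(xs), max(ys)]

def build_model(num_classes):
    """Construct a Mask R-CNN with a ResNet-50 FPN backbone.
    The 0.7 threshold is shown via box_nms_thresh as an illustrative
    stand-in; heavy imports stay inside the function."""
    import torchvision
    return torchvision.models.detection.maskrcnn_resnet50_fpn(
        weights=None, num_classes=num_classes, box_nms_thresh=0.7)
```

The training loop itself (optimizer, 100 epochs over the 2100 annotated screenshots) would follow the standard torchvision detection recipe and is omitted here.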
In this embodiment, DQN is used as the deep reinforcement learning algorithm. The GUI interaction event sequences are input into the Mask R-CNN model, which automatically classifies the graphic elements and locates their bounding boxes; each interaction event corresponds to a json file containing the category of the interacted graphic element, the coordinates of the bounding box's center point (x, y), and the coordinates of the mouse click. The GUI interaction event sequences and json files are fed in pairs into the DQN algorithm and trained for 100 rounds to obtain the trained DQN model, i.e., the deep reinforcement learning model.
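Two standard DQN ingredients — experience replay and epsilon-greedy action selection over (element, action) pairs — can be sketched in plain Python. This is a generic DQN scaffold, not the patent's implementation; a full agent would add a Q-network and training loop (e.g., in PyTorch):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay: stores (s, a, r, s', done) tuples."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buf.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buf, min(batch_size, len(self.buf)))

def epsilon_greedy(q_values, epsilon, rng=random):
    """Select a (element_index, action) pair: explore with probability
    epsilon, otherwise pick the highest-valued entry. `q_values` maps
    (element_index, action) -> estimated value."""
    if rng.random() < epsilon:
        return rng.choice(list(q_values))
    return max(q_values, key=q_values.get)
```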
In this embodiment, as shown in fig. 3, it is a flowchart for updating the mobile application program status according to the present invention:
Step 1: connect the mobile application automation testing tool (Appium) and an Android Virtual Device (AVD) through PyCharm; the AVD here runs Android 7.0 as an example. Start and connect the application in the AVD, and enter the application home page.
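A sketch of such a connection with the Appium Python client. The capability keys follow the Appium desired-capabilities convention, but the server URL, apk path, and device name are placeholders, and the client API varies between Appium-Python-Client versions:

```python
def make_capabilities(apk_path="app-under-test.apk"):
    """Desired capabilities for an Android 7.0 AVD; the apk path and
    device name are placeholders, not values from the patent."""
    return {
        "platformName": "Android",
        "platformVersion": "7.0",
        "deviceName": "Android Emulator",
        "app": apk_path,
        "automationName": "UiAutomator2",
    }

def connect(server_url="http://127.0.0.1:4723/wd/hub"):
    # Requires the Appium-Python-Client package and a running Appium
    # server; kept inside the function so the sketch imports cleanly.
    from appium import webdriver
    return webdriver.Remote(server_url, make_capabilities())
```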
Step 2: take a screenshot of the application's current page via the Appium Python driver's get_screenshot_as_file function, input the screenshot into the trained Mask R-CNN model for graphic-element bounding-box location and classification, and automatically obtain a GUI screenshot annotated with graphic-element bounding boxes and categories.
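A sketch of this step: capturing the screenshot and filtering the detector's raw output into annotated elements. The score threshold and class-name list are illustrative assumptions:

```python
def capture_screenshot(driver, path="current_state.png"):
    """Save the current page as a PNG via the Appium Python client."""
    driver.get_screenshot_as_file(path)
    return path

def detections_to_elements(boxes, labels, scores, class_names,
                           min_score=0.7):
    """Turn raw detector output (parallel lists of boxes, class ids,
    and confidences) into (category, box) annotations, dropping
    low-confidence detections."""
    elements = []
    for box, label, score in zip(boxes, labels, scores):
        if score >= min_score:
            elements.append({"category": class_names[label],
                             "box": list(box)})
    return elements
```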
And step three, detecting whether GUI defects exist or not, and if so, generating a json file recorded with the GUI defects.
Defining operation actions according to the graphic element classification in the mobile application program GUI, wherein the operation actions comprise clicking, double clicking, long pressing, sliding and inputting; sliding includes sliding left, right, up, and down.
Step 5: input the GUI screenshots annotated with graphic-element bounding boxes and categories into the trained DQN model, with the reward function set to the number of new mobile application GUI states plus the cumulative total of GUI defects; automatically obtain the next operation action and the operated graphic element, apply the action to that element so that the application GUI transitions to a new state and the reward increases, save the new state's GUI screenshot, and generate a json file of the interaction event record.
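Applying the chosen action to the operated element means translating its bounding box into device gestures. A sketch, assuming a `[x, y, w, h]` box format; the Appium TouchAction API used here is deprecated in newer client versions in favor of W3C actions:

```python
def bbox_center(box):
    """box = [x, y, w, h] -> center point, the natural tap target."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def swipe_endpoints(box, direction, distance=200):
    """Start/end points for a swipe from the element's center;
    the 200 px distance is an illustrative default."""
    cx, cy = bbox_center(box)
    deltas = {"swipe_left": (-distance, 0), "swipe_right": (distance, 0),
              "swipe_up": (0, -distance), "swipe_down": (0, distance)}
    dx, dy = deltas[direction]
    return (cx, cy), (cx + dx, cy + dy)

def apply_action(driver, action, box):
    """Dispatch an operation action to the device (sketch only)."""
    from appium.webdriver.common.touch_action import TouchAction
    cx, cy = bbox_center(box)
    if action == "click":
        TouchAction(driver).tap(x=int(cx), y=int(cy)).perform()
    elif action.startswith("swipe"):
        (sx, sy), (ex, ey) = swipe_endpoints(box, action)
        driver.swipe(int(sx), int(sy), int(ex), int(ey))
```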
And step six, repeating steps two to five with the goal of maximizing the reward function, that is, continuously converting the GUI into new states, continuously detecting GUI defects, and generating json files recording them.
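The loop in step six needs a stop test; a minimal sketch of the convergence check from claim 2, with a step window standing in for the patent's "set time" for simplicity:

```python
def should_stop(reward_history, window, threshold):
    """Stop exploring once the reward has changed by less than `threshold`
    over the last `window` steps. Using a step count rather than wall-clock
    time is a simplifying assumption."""
    if len(reward_history) < window:
        return False
    recent = reward_history[-window:]
    return max(recent) - min(recent) < threshold
```

When the cumulative reward plateaus, further exploration is unlikely to uncover new states or defects, so testing halts.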
In summary, in the present embodiment, the Mask R-CNN model is applied to identifying and locating GUI graphic elements, and the DQN model is applied to learning and exploring the path policy, so as to effectively convert the GUI into new states and mine GUI defects.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to one another. Since the system disclosed by the embodiments corresponds to the disclosed method, its description is relatively brief, and relevant points can be found in the description of the method.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core concept of the invention; meanwhile, for a person skilled in the art, the specific embodiments and the scope of application may be changed according to the idea of the present invention. In view of the above, the contents of this specification should not be construed as limiting the invention.
Claims (10)
1. A mobile application GUI intelligent test method is characterized by comprising the following steps:
obtaining a plurality of GUI screenshots of a mobile application;
labeling each GUI screenshot with graphic element labels through a labeling tool to obtain GUI screenshots labeled with graphic element labels; the graphic element label comprises a bounding box position and a category of a graphic element on the GUI screenshot;
taking the GUI screenshots as input and the GUI screenshots labeled with graphic element labels as output to train a target detection algorithm, so as to obtain a trained target detection model;
acquiring a GUI interactive event sequence of manual interaction, wherein the GUI interactive event sequence comprises a series of GUI screenshots of continuous interaction;
inputting the GUI interaction event sequence into the trained target detection model to obtain a GUI interaction event sequence labeled with graphic element labels;
marking interactive labels for the GUI interactive event sequences marked with the graphic element labels to obtain the GUI interactive event sequences marked with the graphic element labels and the interactive labels; the interactive label comprises an interactive operation action and an operated graphic element;
taking the GUI interaction event sequence labeled with graphic element labels as input and the GUI interaction event sequence labeled with graphic element labels and interaction labels as output to train a deep reinforcement learning algorithm, so as to obtain a trained deep reinforcement learning model;
obtaining a GUI screenshot of the current state of the mobile application program;
inputting the GUI screenshot of the current state into the trained target detection model to obtain a current-state GUI screenshot labeled with graphic element labels;
detecting whether a GUI defect exists according to the labeled current-state GUI screenshot, and if so, recording the GUI defect;
inputting the labeled current-state GUI screenshot into the trained deep reinforcement learning model to obtain a current-state GUI screenshot labeled with graphic element labels and an interaction label;
applying the operation action to the operated graphic element according to the current-state GUI screenshot labeled with graphic element labels and the interaction label, and updating the state of the mobile application;
judging whether the reward function in the deep reinforcement learning model reaches a stop condition;
and if not, returning to the step of acquiring the GUI screenshot of the current state of the mobile application program.
2. The intelligent testing method for the GUI of the mobile application according to claim 1, wherein determining whether the reward function of the deep reinforcement learning model reaches the stop condition specifically comprises:
and if the change of the reward function in the deep reinforcement learning model is smaller than a set threshold value within set time, stopping the intelligent test of the GUI of the mobile application program.
3. The method of claim 2, wherein the reward function is positively correlated with the cumulative total of mobile application GUI states and GUI defects; wherein the GUI states accumulated in the reward function are not repeated, and the accumulated GUI defects are not repeated.
4. The mobile application GUI intelligent test method of claim 1, further comprising:
detecting whether GUI defects exist according to the GUI screenshot of the current state of the graphic element labeling;
and if the GUI defect exists, recording the json file of the GUI defect.
5. The method for intelligently testing the GUI of a mobile application according to claim 1, wherein before the obtaining of the GUI screenshot of the current state of the mobile application, the method further comprises:
and performing connection interaction with the mobile application program through an automatic testing tool.
6. The mobile application GUI intelligent test method of claim 1, wherein the graphic element categories comprise: text buttons, icons, radio buttons, check boxes, sliders, switches, page indicators, input boxes, and GUI defects; the GUI defects comprise: loading errors, garbled text, blank pages, and error prompts.
7. The mobile application GUI intelligent test method of claim 1, wherein the bounding box position comprises the coordinates, width, and height of the bounding box.
8. The mobile application GUI intelligent test method of claim 1, wherein the operation actions comprise single click, double click, long press, swipe, and input; swiping includes swiping left, right, up, and down.
9. A mobile application GUI intelligent test system, the system comprising:
the multiple GUI screenshot obtaining module is used for obtaining multiple GUI screenshots of the mobile application program;
the first graphic element labeling module is used for labeling each GUI screenshot with graphic element labels through a labeling tool to obtain GUI screenshots labeled with graphic element labels; the graphic element label comprises a bounding box position and a category of a graphic element on the GUI screenshot;
the target detection model training module is used for taking the GUI screenshot as input and taking the GUI screenshot labeled with the graphic element as output to train a target detection algorithm so as to obtain a trained target detection model;
the GUI interaction event sequence acquisition module is used for acquiring a GUI interaction event sequence of manual interaction, and the GUI interaction event sequence comprises a series of GUI screenshots of continuous interaction;
the second graphical element labeling module is used for inputting the GUI interaction event sequence into the trained target detection model to obtain the GUI interaction event sequence labeled by the graphical elements;
the interactive labeling module is used for labeling the interactive labels for the GUI interactive event sequences labeled by the graphic element labels to obtain the GUI interactive event sequences labeled by the graphic element labels and the interactive labels; the interactive label comprises an interactive operation action and an operated graphic element;
the deep reinforcement learning model training module is used for taking the GUI interaction event sequence labeled with graphic element labels as input and the GUI interaction event sequence labeled with graphic element labels and interaction labels as output to train a deep reinforcement learning algorithm, so as to obtain a trained deep reinforcement learning model;
the GUI screenshot acquiring module of the current state is used for acquiring the GUI screenshot of the current state of the mobile application program;
the target detection model application module is used for inputting the GUI screenshot of the current state into the trained target detection model to obtain a current-state GUI screenshot labeled with graphic element labels;
the GUI screenshot detection module is used for detecting whether a GUI defect exists according to the labeled current-state GUI screenshot, and if so, recording the GUI defect;
the deep reinforcement learning model application module is used for inputting the labeled current-state GUI screenshot into the trained deep reinforcement learning model to obtain a current-state GUI screenshot labeled with graphic element labels and an interaction label;
the mobile application updating module is used for applying the operation action to the operated graphic element according to the current-state GUI screenshot labeled with graphic element labels and the interaction label, and updating the state of the mobile application;
the judging module is used for judging whether the reward function in the deep reinforcement learning model reaches a stop condition;
and the return module is used for returning to the GUI screenshot obtaining module in the current state if the reward function does not reach the stop condition.
10. The system according to claim 9, wherein the determining module specifically comprises:
and the judging unit is used for stopping the intelligent test of the GUI of the mobile application program if the change of the reward function in the deep reinforcement learning model is less than a set threshold value within set time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110116469.1A CN113032243B (en) | 2021-01-28 | 2021-01-28 | Intelligent testing method and system for GUI (graphical user interface) of mobile application program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110116469.1A CN113032243B (en) | 2021-01-28 | 2021-01-28 | Intelligent testing method and system for GUI (graphical user interface) of mobile application program |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113032243A true CN113032243A (en) | 2021-06-25 |
CN113032243B CN113032243B (en) | 2021-12-17 |
Family
ID=76459411
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110116469.1A Active CN113032243B (en) | 2021-01-28 | 2021-01-28 | Intelligent testing method and system for GUI (graphical user interface) of mobile application program |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113032243B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103425472A (en) * | 2012-05-23 | 2013-12-04 | 上海计算机软件技术开发中心 | System for dynamically creating software testing environments on basis of cloud computing and method for implementing system |
CN109308250A (en) * | 2017-07-26 | 2019-02-05 | 上海富瀚微电子股份有限公司 | A kind of GUI automated testing method and system |
CN109408384A (en) * | 2018-10-16 | 2019-03-01 | 网易(杭州)网络有限公司 | Test method, device, processor and the electronic device of software application |
US20200019488A1 (en) * | 2018-07-12 | 2020-01-16 | Sap Se | Application Test Automate Generation Using Natural Language Processing and Machine Learning |
CN111797007A (en) * | 2020-06-18 | 2020-10-20 | 中国科学院软件研究所 | Automatic detection and positioning method for application program user interface defects and electronic device |
Non-Patent Citations (2)
Title |
---|
D. ZHAO 等: "Seenomaly: Vision-Based Linting of GUI Animation Effects Against Design-Don"t Guidelines", 《2020 IEEE/ACM 42ND INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE)》 * |
徐时怀 等: "基于云平台和深度学习的软件GUI自动测试系统", 《中国计量大学学报》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115687115A (en) * | 2022-10-31 | 2023-02-03 | 上海计算机软件技术开发中心 | Automatic testing method and system for mobile application program |
CN115687115B (en) * | 2022-10-31 | 2023-07-28 | 上海计算机软件技术开发中心 | Automatic testing method and system for mobile application program |
CN118227493A (en) * | 2024-04-01 | 2024-06-21 | 四川大学 | GUI image recognition automatic test method based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN113032243B (en) | 2021-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7398068B2 (en) | software testing | |
CN104461855B (en) | A kind of Web automated testing method, system and device | |
US20210326245A1 (en) | AI Software Testing System and Method | |
CN108763068B (en) | Automatic testing method and terminal based on machine learning | |
CN109408384B (en) | Software application testing method and device, processor and electronic device | |
CN113391871B (en) | RPA element intelligent fusion picking method and system | |
US20200349466A1 (en) | Providing performance views associated with performance of a machine learning system | |
CN113032243B (en) | Intelligent testing method and system for GUI (graphical user interface) of mobile application program | |
CN105955889A (en) | Graphic interface automated test method | |
CN113255614A (en) | RPA flow automatic generation method and system based on video analysis | |
CN101025686A (en) | Automation test system and test script generating and operating method | |
CN112749081A (en) | User interface testing method and related device | |
CN103814373A (en) | Automatic classification adjustment of recorded actions for automation script | |
CN117421217B (en) | Automatic software function test method, system, terminal and medium | |
CN112258161A (en) | Intelligent software testing system and testing method based on robot | |
CN116127203A (en) | RPA service component recommendation method and system combining page information | |
CN114416516A (en) | Test case and test script generation method, system and medium based on screenshot | |
Sun et al. | Ui components recognition system based on image understanding | |
CN112527676A (en) | Model automation test method, device and storage medium | |
CN112416788A (en) | Hierarchical standard Web application UI automatic test method | |
CN115562656A (en) | Page generation method and device, storage medium and computer equipment | |
CN112817863B (en) | AI-aided automatic test method and system based on AI deep learning | |
CN117573006B (en) | Method and system for batch pick-up of RPA screen interface elements | |
CN113703637A (en) | Inspection task coding method and device, electronic equipment and computer storage medium | |
CN115905016A (en) | BIOS Setup search function test method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||