CN113704125A - Black box automatic testing method based on graphical user interface

Info

Publication number
CN113704125A
Authority
CN
China
Prior art keywords
gui
software
control
tested
state
Prior art date
Legal status
Pending
Application number
CN202111021509.0A
Other languages
Chinese (zh)
Inventor
余林玲
高建丰
Current Assignee
Nanjing Leading Technology Co Ltd
Original Assignee
Nanjing Leading Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Leading Technology Co Ltd
Priority to CN202111021509.0A
Publication of CN113704125A
Status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G06F 11/3668: Software testing
    • G06F 11/3672: Test management
    • G06F 11/3684: Test management for test design, e.g. generating new test cases
    • G06F 11/3688: Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3696: Methods or tools to render software testable

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiments of this application provide a black box automated testing method and apparatus based on a graphical user interface, an electronic device, and a storage medium. Event information and control information of a tester operating the software under test are acquired; a test script is generated from the event information, the control information, and preset assertions; and the test script is run on a driver engine to perform automated GUI testing of the software under test and generate a test report. This realizes automated GUI testing of the software under test, improves the efficiency and accuracy of GUI testing, helps improve the performance of the software under test, and improves the user experience.

Description

Black box automatic testing method based on graphical user interface
Technical Field
The embodiments of this application relate to automated software testing, belong to the technical field of computer software testing, and in particular relate to a black box automated testing method and apparatus based on a graphical user interface, an electronic device, and a storage medium.
Background
Software testing is an important means of ensuring software quality and an important link in the software development process. Testing of the Graphical User Interface (GUI) is a key link in modern software testing, and the quality of the GUI is key to improving the quality and reducing the cost of the whole software product. As software technologies based on agile development continue to advance, software testing faces greater challenges: it must not only achieve a higher degree of automation and reduce manual involvement, but also meet higher requirements for the efficiency and performance of automated testing. Due to the uniqueness of Windows GUI application software, traditional software testing methods are not suitable for testing GUI applications, and manual GUI testing can no longer meet the testing requirements of current GUI application software.
In the prior art, a capture/playback (C/P) mechanism is generally adopted for the automated testing of GUI application software. However, the C/P mechanism does not support such testing well: it can only passively capture the execution information of the software under test, it cannot interact with the software under test, and it cannot capture execution information selectively.
Therefore, effective automated testing of GUI application software is an urgent technical problem to be solved in existing software testing technology.
Disclosure of Invention
The embodiment of the application provides a black box automatic testing method and device based on a graphical user interface, electronic equipment and a storage medium, and realizes black box automatic testing of GUI application software.
In a first aspect, an embodiment of the present application provides a black box automated testing method based on a graphical user interface, including:
acquiring event information and control information of a tester operating software to be tested;
generating a test script according to the event information, the control information and a preset assertion;
and running the test script on a driving engine, carrying out GUI test on the software to be tested, and generating a test report.
In a second aspect, an embodiment of the present application provides a black box automated testing apparatus based on a graphical user interface, including:
the recording module is used for acquiring event information and control information of the software to be tested operated by a tester; generating a test script according to the event information, the control information and a preset assertion;
and the playback module is used for running the test script on a driving engine, carrying out GUI test on the software to be tested and generating a test report.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the method for automatically testing a black box based on a graphical user interface according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for black box automated testing based on a graphical user interface as described in the first aspect above.
With the black box automated testing method and apparatus based on a graphical user interface, the electronic device, and the storage medium of the embodiments of this application, event information and control information of a tester operating the software under test are acquired, a test script is generated from the event information, the control information, and preset assertions, and the test script is run on a driver engine to perform GUI testing of the software under test and generate a test report.
Drawings
Fig. 1 is a schematic flowchart of a black box automated testing method based on a graphical user interface according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an assertion interface provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a control identification method according to a second embodiment of the present application;
fig. 4 is a schematic flowchart of a dynamic traversal algorithm provided in the second embodiment of the present application;
fig. 5 is a schematic view of a traversal process provided in the second embodiment of the present application;
fig. 6 is a schematic view of a GUI state transition determination process provided in the second embodiment of the present application;
FIG. 7 is a flowchart illustrating another dynamic traversal algorithm provided in the second embodiment of the present application;
fig. 8 is a flowchart illustrating a control identification method according to a third embodiment of the present application;
fig. 9 is a logic diagram for acquiring a GUI screenshot based on a combination of an automatic screenshot and a manual screenshot provided in the third embodiment of the present application;
fig. 10 is a schematic structural diagram of a black box automated testing apparatus based on a graphical user interface according to a fourth embodiment of the present application;
fig. 11 is a schematic structural diagram of another black box automated testing apparatus based on a graphical user interface according to a fourth embodiment of the present application;
FIG. 12 is a schematic diagram of a design architecture of a script management module according to a fourth embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
First, the concept used in the present embodiment is explained as follows:
(1) window state
It is an abstract concept describing the internal situation of a particular window of the application software at a particular time. A single window state can be represented by an {O, P, V} triple, where O (Objects) represents the set of all GUI elements in the window state and can be expressed as O = {o1, ..., on}; P (Properties) represents all the attributes of a single GUI element in O and can be expressed as P = {p1, ..., pm}; and V (Values) represents all the attribute values of each GUI element in O and can be expressed as V = {(v11, ..., v1m), ..., (vn1, ..., vnm)}.
(2) GUI state
An abstract concept indicating the overall condition of the application's GUI at each moment; it is a state identifier reflecting the overall condition of the graphical user interface of the software under test at a particular time. A single GUI state is represented by a {wt, W} pair, where wt represents the window state of the top-level active window in the GUI state, and W represents the set of window states of all windows in the GUI state other than the top-level active window.
(3) GUI state transitions
The GUI state of a Windows application at any given moment is stable and static, i.e., {wt, W} is uniquely determined. However, as the application responds to and executes the user's action events, the continuous running of the whole application corresponds to a process of GUI state transitions, i.e., {wt, W} changes. A change of wt and W is called a GUI state transition; a single transition process ti can be formalized as (Si, ei, S'i), where Si represents the GUI state before the transition, S'i represents the GUI state after the transition, and ei represents the transition event.
(4) Migration event
A user event that causes a GUI state transition at a given time. Transition events include reachable events and termination events. A reachable event is a user event that opens a new window at a certain time; a termination event is a user event that closes the current top-level active window at a certain time.
(5) State sequence table and event sequence table
A state sequence table: for storing all GUI states;
an event sequence table: for storing all user events.
(6) Migration state list and migration event list
List of migration states: the GUI state storage module is used for storing the GUI state which is migrated in the traversal process;
migration event list: the migration event storage system is used for storing migration events in the traversal process.
The main idea of the technical solution is as follows. To address the above technical problems in the prior art, the embodiments of this application provide a black box automated testing scheme for Windows GUI application software. On the basis of "record-playback" regression testing, test steps are generated during the recording phase by recording the tester's operations on the software under test; a test script is automatically generated from the test steps and preset assertions (judgment statements); and during the playback phase a driver engine runs the test script, realizing black box automated testing of the GUI of the software under test. In addition, this embodiment improves on existing control identification and traversal algorithms when acquiring control object information, which improves the accuracy and efficiency of control identification, shortens test script generation time, improves test efficiency, and improves the usability and reliability of the test scripts and thus the accuracy of the test.
Example one
Fig. 1 is a schematic flowchart of a black box automated testing method based on a graphical user interface according to an embodiment of the present application, where the method according to this embodiment may be executed by a black box automated testing apparatus based on a graphical user interface according to an embodiment of the present application, and the apparatus may be implemented by software and/or hardware, and may be integrated in an electronic device such as a computer, which is installed with software to be tested. As shown in fig. 1, the method for automatically testing a black box based on a graphical user interface of the present embodiment includes:
s101, obtaining event information and control information of a tester operating the software to be tested.
In this embodiment, the software under test refers to GUI application software; correspondingly, the tester's operations are operations on the GUI of the software under test. To realize black box automated testing of the GUI of the software under test, when testing is initiated the tester needs to design a test scheme for the software under test, which includes the specific test steps.
Correspondingly, in this embodiment, when the tester tests the software to be tested according to the test steps in the preset test scheme, the action record of the tester operating the software to be tested is recorded, where the action record includes event information and control information. The event information refers to information of a tester operating an input device such as a mouse, a keyboard, a touch screen and the like, such as clicking and inputting characters and the like. The control information refers to information of a control object operated by a tester, and includes the name and type of the control object, the position of the control object on the GUI, and the like. There is a correspondence between the event information and the control information.
Optionally, in this embodiment, the operation of the tester on the input device related to the software to be tested may be monitored through the monitoring device installed in advance, so as to obtain event information of the tester operating the software to be tested.
Optionally, in this embodiment, control information of the tester operating the software to be tested may be obtained through a preset control identification algorithm. The control identification algorithm can be executed by calling a preset control drive, and is mainly used for identifying a control object operated by a user from a GUI (graphical user interface) of software to be tested and obtaining information of the control object, such as position information and the like.
The control driver in this embodiment has functions of an Application Programming Interface (API) driver and a picture driver at the same time, and according to different driving types supported by software to be tested, the control driver may be called in the technical solution of this embodiment to use different control identification algorithms to obtain control information, which may be specifically described in embodiment two and embodiment three.
In this embodiment, in order to meet the requirement for subsequently generating a test script, the acquired event information and control information may be stored in a set manner according to different operation sequences performed by software to be tested by a tester, for example, in the format of { (event information 1, control information 1), (event information 2, control information 2), … …, (event information n, control information n) }.
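For illustration, the recorded pairs might be held in structures like the following minimal Python sketch; all field names here are illustrative assumptions, not the patent's own data model:

```python
from dataclasses import dataclass

@dataclass
class EventInfo:
    """An input-device action performed by the tester."""
    device: str   # e.g. "mouse", "keyboard"
    action: str   # e.g. "click", "type"
    payload: str  # e.g. the typed text, or "" for a plain click

@dataclass
class ControlInfo:
    """The control object that the action targeted."""
    name: str       # control name, e.g. "OK"
    ctrl_type: str  # control type, e.g. "Button"
    rect: tuple     # (left, top, right, bottom) position on the GUI

# The recording is kept as an ordered list of (event, control) pairs,
# mirroring {(event 1, control 1), ..., (event n, control n)} above.
recording: list[tuple[EventInfo, ControlInfo]] = []
```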
And S102, generating a test script according to the event information, the control information and the preset assertion.
In this step, a test script is automatically generated according to the event information and the control information acquired in S101 and a preset assertion.
Assertions are essentially Boolean expressions that are set in advance. In the automated testing process, assertions are mainly used to set checkpoints for test steps, implementing the expected judgment of the software's running result: if the test result meets the expectation, the test step has reached the expected testing requirement; otherwise it has not. Assertions can therefore replace manual judgment of whether the GUI of the software under test matches the expected test result.
In order to make the assertion function more powerful and support more complex logic verification, in this embodiment, the assertions are classified in advance, so that the corresponding assertions can be selected for testing, and a test script can be generated.
From the content of the assertion, in this embodiment, the assertion is divided into a control-level assertion, a picture assertion, and a combined assertion:
(1) Control-level assertion (ControlAssert): used to judge whether the relevant attributes of a specified control in the page meet expectations. It can be used, in particular, to judge whether a control exists, to match control attribute strings against regular expressions, to compare the magnitude of control attribute values, and so on.
(2) Picture assertion (PicAssert): used, when the control element cannot be obtained, to judge via image comparison whether a picture is correctly displayed on the program's running interface. The granularity of picture assertions is coarser than that of control-level assertions.
(3) Combined assertion (CombineAssert): an assertion formed by combining control-level assertions and picture assertions according to certain logic rules. It can achieve more complex assertion functions, including sequence control and if-else condition control.
From the assertion type, in the present embodiment, the assertion can be divided into a hard assertion and a soft assertion:
(1) Hard assertion: when a hard assertion fails, the test program is terminated and the error is thrown, and the remaining test steps are not executed.
(2) Soft assertion: when a soft assertion fails, the test program is not immediately terminated; the remaining test steps and assertions continue to execute, and all errors are thrown after all soft assertions have finished.
Optionally, according to the above assertion classification, an assertion interface and its classes may be defined in advance in this embodiment, and different types of assertions are obtained and executed through different classes. Exemplarily, fig. 2 is a schematic diagram of an assertion interface provided in an embodiment of the present application. As shown in fig. 2, the assertion interface in this implementation includes: a control-level assertion (ControlAssert) class, a picture assertion (PicAssert) class, a combined assertion (CombineAssert) class, an assertion exception (AssertException) class, and a logical combination rule (LogicTemplate) class; different classes are used to implement different interface functions of an assertion. Wherein:
the ControlAssert class mainly implements related operations of the control-level assertion and stores the position of the control in the control tree. In the playback stage, the control-level assertion finds the corresponding control object through the control position, and verifies whether the related attribute of the control object is in accordance with the expectation.
The PicAssert class mainly implements the operations of picture-level assertions. In the playback stage it obtains a screenshot of the current software interface and the assertion picture, and checks whether the assertion picture exists in the software interface via the template matching function in OpenCV.
The CombineAssert class mainly implements the operations of combination-level assertions. The object loads a preselected logic template, which implements two kinds of logic: a sequential structure and a branch structure. Several Assert objects are loaded into the logic template to determine the logic that the combined assertion executes.
The AssertException class mainly implements exception handling of assertion exceptions, including soft and hard assertions.
The LogicTemplate class is used to formulate logic rules and provide logic templates.
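To make the interface concrete, here is a minimal Python sketch of these classes; everything beyond the class names and the OpenCV template matching named in the text (method names, the similarity threshold, the tree-lookup helper) is an assumption:

```python
import cv2

class AssertException(Exception):
    """Raised when an assertion fails; soft failures are collected, hard ones stop the run."""

class ControlAssert:
    """Control-level assertion: stores the control's position in the control tree."""
    def __init__(self, control_path, attribute, expected):
        self.control_path = control_path
        self.attribute = attribute
        self.expected = expected

    def check(self, control_tree):
        # Find the control object by its position in the tree (hypothetical helper).
        control = control_tree.find(self.control_path)
        if control is None or getattr(control, self.attribute) != self.expected:
            raise AssertException(f"{self.control_path}.{self.attribute} != {self.expected!r}")

class PicAssert:
    """Picture-level assertion: template-matches the assertion picture on the interface."""
    def __init__(self, assert_pic, threshold=0.9):  # similarity cutoff assumed
        self.assert_pic = assert_pic
        self.threshold = threshold

    def check(self, screenshot):
        result = cv2.matchTemplate(screenshot, self.assert_pic, cv2.TM_CCOEFF_NORMED)
        if result.max() < self.threshold:
            raise AssertException("assertion picture not found on the interface")

class CombineAssert:
    """Combination-level assertion: Assert objects loaded into a logic template."""
    def __init__(self, asserts):
        self.asserts = asserts

    def check(self, context):
        for a in self.asserts:  # sequential-structure logic only, for brevity
            a.check(context)
```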
In one possible implementation, in this step, a test step is generated by combining the event information and control information acquired in S101; a step checkpoint is generated by combining the control information with a target assertion; and a test script is generated by combining the test steps and step checkpoints.
The target assertion is the assertion selected to judge the operation result for the current control information; it may combine one or more of the assertion types above. For different control information, the selected target assertions may be the same or different.
In this embodiment, corresponding grammar rules may be preset, including a test step generation rule, a step checkpoint generation rule, and a test script generation rule, and event information and control information are combined according to the test step generation rule to generate a test step, and control information and target assertion are combined according to the step checkpoint generation rule to generate a step checkpoint, and the test step and the step checkpoint are combined according to the test script generation rule to generate a test script.
It can be understood that, when there are multiple sets of event information and control information acquired in S101, in this step, a corresponding test step may be generated according to each set of event information and control information, and a step check point corresponding to the test step may be generated according to the control information and the corresponding target assertion in each set, and then, all the obtained test steps and step check points are combined, and finally, a test script is generated.
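As an illustration, combining steps and checkpoints into a script could then look like the sketch below; the grammar rules themselves are not spelled out in the text, so the dictionary layout is an assumption:

```python
def generate_script(recording, target_asserts):
    """recording: list of (EventInfo, ControlInfo) pairs;
    target_asserts: maps a control name to the assertions selected for it."""
    script = []
    for event, control in recording:
        # Test-step generation rule: one step per (event, control) pair.
        script.append({"type": "step", "event": event, "control": control})
        # Step-checkpoint generation rule: pair the control with its target assertions.
        for target in target_asserts.get(control.name, []):
            script.append({"type": "checkpoint", "control": control, "assert": target})
    return script
```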
Optionally, to strengthen test script management, in this embodiment the generated test script may further be split into a logic part and a data part, with a mapping relationship established between them. The logic part contains information such as the event type and the control path; the data part contains data such as the input variables of test steps or the variable data that assertions need to compare; and the mapping between the two is stored separately. This separates test data from test step logic, which facilitates modular management of test scripts. Moreover, when there are multiple test scripts, managing them this way reduces the coupling between scripts, improves the efficiency of automated testing, and reduces its cost.
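For instance, a step that types a user name into a login box might be split as follows; the field names and paths are purely illustrative:

```python
# Logic part: event type and control path only, with a key referencing the data part.
logic = {"event": "type",
         "control_path": "LoginWindow/UserNameEdit",
         "data_ref": "login.user"}

# Data part: input variables and the variable data assertions need to compare,
# stored separately from the logic.
data = {"login.user": "tester01",
        "login.expected_title": "Welcome"}

# The mapping between the two parts is carried by the data_ref keys.
```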
In the embodiment, the test script is generated based on the assertion, and the test script is subjected to modular management, so that the test efficiency can be improved, and the test cost can be saved.
S103, running a test script on the driving engine, carrying out GUI test on the software to be tested, and generating a test report.
In this step, the test script generated in S102 is sent to the driver engine, and the driver engine runs the test steps and step check points in the test script, performs GUI test on the software to be tested, and obtains a corresponding test report.
Optionally, the driving engine in this embodiment is a UI driving engine.
In this embodiment, event information and control information of the tester operating the software under test are acquired, a test script is generated from the event information, the control information, and preset assertions, and the test script is run on a driver engine to test the GUI of the software under test and generate a test report. This realizes black box automated testing of the GUI, improves the efficiency and accuracy of GUI testing, helps improve the performance of the software under test, and improves the user experience.

Control identification (i.e., the acquisition of control object information) is one of the keys to the whole automated GUI test, and its effectiveness directly affects the correctness of the test. In the recording stage of the regression test, mouse and keyboard events need to be captured: when such an event occurs, the position where the mouse rests on the screen is recorded, and the operated control is identified from that position. Accordingly, control identification in the embodiments of this application falls into two cases: depending on the driving types supported by the software under test, different control identification algorithms can be used. The API-driven control identification method and the picture-driven control identification method (for software that does not support API driving) are introduced in the second and third embodiments, respectively.
Example two
Exemplarily, fig. 3 is a schematic flow diagram of a control identification method provided in the second embodiment of the present application, and as shown in fig. 3, if the to-be-tested software supports API driving, in this embodiment, control information for a tester to operate the to-be-tested software is obtained through the following control identification algorithm:
s201, extracting a GUI control tree of the software to be tested through a preset API.
The control identification algorithm provided by this embodiment acquires control information based on the control tree and the mouse position. Therefore, in this step, the GUI control tree of the software under test is extracted by calling a preset dedicated API. The GUI control tree contains all the control information currently displayed by the software under test when the tester operates it, such as the positions, types, names, and index values of the controls in the GUI interface. Accordingly, a node in the GUI control tree corresponds to the control information of one control object in the GUI interface.
S202, obtaining mouse position information when a tester operates the software to be tested.
Since the control information is related to the position information of the mouse when the tester operates the software to be tested, in this step, the position information of the mouse when the tester operates the software to be tested is also required to be obtained in order to obtain the control information related to the operation of the tester.
The mouse position information refers to position coordinate information of a positioning cursor of the mouse on the GUI, and may be exemplarily represented by (x, y), where x represents a coordinate of the positioning cursor of the mouse in the GUI horizontal direction, and y represents a coordinate of the positioning cursor of the mouse in the GUI vertical direction.
In this step, the acquisition of the mouse position information may be performed by a pre-programmed mouse position capture algorithm or a mouse position capture tool installed on the device.
It is understood that this step may be executed before or after S201, or may be executed simultaneously with S201, and there is no strict execution sequence between S202 and S201, which is not limited herein.
And S203, traversing the control object in the GUI control tree according to the mouse position information to obtain control information.
In this step, based on the mouse position information acquired in S202, all nodes (control objects) in the entire GUI control tree are searched in a depth-first traversal manner, so as to obtain control information corresponding to the mouse position information.
Specifically, it is determined whether a target control object matching the mouse position information exists in the GUI control tree, i.e., a control object whose on-screen region contains the mouse position. If it exists, the information of the target control object is obtained and determined to be the control information of the tester operating the software under test.
For example, the control identification algorithm provided by this embodiment can be expressed as a depth-first search over the GUI control tree for the control whose screen region contains the recorded mouse position.
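A minimal Python sketch consistent with this description follows; the node fields rect and children are assumptions:

```python
def find_control(node, x, y):
    """Depth-first search of the GUI control tree for the deepest control
    whose bounding rectangle contains the mouse position (x, y)."""
    left, top, right, bottom = node.rect
    if not (left <= x <= right and top <= y <= bottom):
        return None
    for child in node.children:
        hit = find_control(child, x, y)
        if hit is not None:
            return hit  # prefer the deepest (most specific) match
    return node  # no child matched; this node is the operated control

# control_info = find_control(gui_control_tree_root, mouse_x, mouse_y)
```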
some GUI control objects can be displayed in corresponding GUI only after responding to a specific user event, in order to ensure that all possible GUI control objects can be traversed, a reliable test script is generated, the coverage rate of the GUI automatic test is improved, further, if a target control object matched with the mouse position information does not exist in the GUI control tree, in the embodiment, a preset dynamic traversal algorithm is adopted, the user event is sent to the GUI of the software to be tested, so that all hidden control objects in the GUI are triggered to be displayed, all hidden control objects in the GUI are traversed according to the mouse position information, and the control information of the software to be tested operated by a tester is obtained.
In a possible implementation manner, fig. 4 is a schematic flow chart of a dynamic traversal algorithm provided in the second embodiment of the present application, and as shown in fig. 4, in this implementation manner, the dynamic traversal algorithm includes:
s301, acquiring a first GUI state of the software to be tested.
For convenience of distinguishing, a current GUI state obtained before sending a user event to the GUI of the software to be tested is called a first GUI state, and a window state set corresponding to the first GUI state is called a first window state set, where the first window state set includes a window state of a first top-level active window corresponding to the first GUI state and window states of other windows except the first top-level active window.
S302, according to the first GUI state, determining to send a user event sequence to the GUI of the software to be tested.
In this embodiment, all GUI states and user events related to the hidden control object of the software to be tested may be obtained in advance, and the state sequence table and the event sequence table may be generated according to the occurrence sequence of the GUI states and the correspondence between the GUI states and the user events.
Correspondingly, in this step, a user event sequence for performing GUI state traversal is obtained in different ways according to whether a top-level active window in the current GUI state (i.e., in the first GUI state) contains a configuration file.
In order to meet the requirement of dynamic traversal, in this embodiment, for the case that the name of the top-level active window is not fixed, a user action sequence is defined in advance, and the sequence includes a control path and event information of the operated control and is stored in a configuration file. Here, it may be determined whether the top-level active window of the current GUI contains a configuration file by looking up the window state of the top-level active window in the first set of window states, i.e., the top-level active window { O, P, V } triple.
Specifically, if it does contain one, the user event sequence is acquired from the configuration file. If not, the position index of the first GUI state in the state sequence table is acquired, and the user event sequence is acquired from the event sequence table according to the position index. It can be understood that this user event sequence comprises the user event corresponding to the position index in the event sequence table, the last user event in the event sequence table, and all user events in between.
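A small sketch of this lookup, assuming the state sequence table and event sequence table are parallel lists and that the configuration file exposes its action sequence directly (both accessors are assumptions):

```python
def get_user_event_sequence(first_gui_state, state_sequence, event_sequence):
    config = first_gui_state.top_window.config_file  # hypothetical accessor
    if config is not None:
        return config.user_events  # predefined action sequence from the file
    # Position index of the first GUI state in the state sequence table...
    idx = state_sequence.index(first_gui_state)
    # ...selects the corresponding user event and every event after it.
    return event_sequence[idx:]
```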
And S303, sending a user event to the GUI of the software to be tested according to the user event sequence so as to trigger the hidden control object in the GUI to be displayed.
In this step, according to the obtained user event sequence in S302, a preset dynamic detection strategy is adopted to send a user event to the GUI of the software to be tested, so as to trigger the hidden control object in the GUI to be displayed. The method comprises the steps of sending a user event to a certain control in the current GUI state, extracting the current GUI state again after the software to be tested completes the response of the user event, triggering the next user event, repeating the process until the traversal of all GUI states of the software to be tested is completed, and achieving the purpose of displaying all hidden control objects in the GUI.
To further illustrate the implementation principle of S303 in this embodiment, exemplarily, fig. 5 is a schematic view of a traversal process provided in the second embodiment of the present application, as shown in fig. 5, in this embodiment, the traversal process based on the user event sequence includes:
s401, acquiring a target user event from the user event sequence.
The target user event may be any user event in the user event sequence. It can be understood that, at the beginning of the traversal, the target user event is the first user event in the user event sequence, and in the subsequent traversal, the target user event is the next user event of the current user event in turn.
S402, sending the target user event to the GUI.
In this step, the target user event is sent to a control object in the first GUI state, where the control object may be a control object associated with the target user event.
And S403, when the GUI responds to the target user event, acquiring a second GUI state of the software to be tested.
In this step, when it is monitored that the GUI responds to or executes the target user event, the current GUI state of the software to be tested is acquired again, and for convenience of distinguishing, the current GUI state acquired after the target user event is sent to the GUI of the software to be tested is called a second GUI state, and a window state set corresponding to the second GUI state is called a second window state set, where the second window state set includes a window state of a second top-level active window corresponding to the second GUI state and window states of other windows except the second top-level active window.
S404, determining whether the GUI state of the software to be tested is transferred or not according to the second window state set and the first window state set.
In this step, whether the GUI state of the software under test has migrated is determined by comparing the second window state set of the second GUI state with the first window state set of the first GUI state. If a transition occurred, meaning the second GUI state differs from the first GUI state, S405 is executed, and after S405 the process returns to S401, i.e., a new target user event is acquired, until the traversal of all GUI states is complete. If no transition occurred, meaning the second GUI state is the same as the first GUI state, the target user event is evidently a termination event, and S406 is executed.
It will be appreciated that the essence of this step is to determine whether two adjacent GUI states are the same, i.e., whether the new GUI state is the same as the previous one. As S401-S404 are executed repeatedly, the newly acquired GUI states may be called the third GUI state, the fourth GUI state, the fifth GUI state, and so on.
S405, storing the first GUI state into a migration state list in the cache, and storing the target user event into a migration event list in the cache.
In this step, the first GUI state and the target user event are stored in the cache, so that the first GUI state and the target user event are not repeatedly executed in the subsequent traversal process, thereby improving the traversal efficiency.
And S406, marking the target user event as a termination event.
Through S401-S406 above, judging whether two consecutive GUI states are the same during traversal, and caching the traversed GUI states and their corresponding migration events, ensures that no loop-back problem (i.e., an infinite loop between two GUI states) occurs during traversal, thereby improving traversal efficiency.
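A simplified sketch of this loop follows; the helper callables and the is_termination flag are assumptions standing in for the mechanisms described above:

```python
def traverse(current_state, user_events, send_event, get_gui_state, states_equal):
    migrated_states, migration_events = [], []  # the migration lists in the cache
    for event in user_events:        # S401: take the next target user event
        send_event(event)            # S402: send it to the GUI
        new_state = get_gui_state()  # S403: GUI state after the response
        if states_equal(new_state, current_state):
            event.is_termination = True   # S406: no transition -> termination event
            continue
        migrated_states.append(current_state)  # S405: cache state and event so they
        migration_events.append(event)         #       are not traversed repeatedly
        current_state = new_state
    return migrated_states, migration_events
```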
To further explain the implementation principle of S404, exemplarily, fig. 6 is a schematic view of a GUI state transition determination process provided in the second embodiment of the present application, and as shown in fig. 6, in this embodiment, it is determined whether the GUI state transition occurs in the software to be tested through the following steps:
s501, a first window state set and a second window state set are obtained.
In this step, a first window state set corresponding to the first GUI state and a second window state set corresponding to the second GUI state are obtained, respectively.
S502, determining whether the window state quantity in the second window state set is the same as that in the first window state set.
Since the window state of the top-level active window and the window states of other windows are included in the window state set, GUI state transition is usually reflected in the number of window states (i.e., the number of windows) or the difference of the top-level active windows. For this reason, in this step, it is determined whether the two GUI states are the same according to whether the number of window states in the window state set corresponding to different GUI states is the same, if so, S503 is executed to make a further determination, and if not, S505 is executed.
S503, respectively obtaining a second top-level active window control tree in the second GUI state and a first top-level active window control tree in the first GUI state according to the second window state set and the first window state set.
When it is determined according to S502 that the numbers of window states in the window state sets of the two GUI states are the same, in this step, further, according to the window state sets of the two GUI states, corresponding top-level active window control trees are respectively generated, so as to determine whether the two GUI states are the same according to the controls in the top-level active window. For the convenience of distinguishing, a top-level active window control tree corresponding to the first GUI state, that is, a top-level active window control tree generated according to the first window state set is referred to as a first top-level active window control tree; and calling the top-layer active window control tree corresponding to the second GUI state, namely the top-layer active window control tree generated according to the second window state set, as a second top-layer active window control tree.
S504, whether the control objects in the second top-level active window control tree and the first top-level active window control tree are the same or not is determined.
On the basis of S503, in this step, it is determined whether the control objects in the top-level active window corresponding to the two GUI states are the same according to the obtained two top-level active window control trees. And if the number and the types of the control objects in the two control trees are the same, the control objects in the two top-level active windows are the same, and S506 is executed, otherwise, the control objects in the two top-level active windows are different, and S505 is executed.
And S505, determining that GUI state transition of the software to be tested occurs.
And S506, determining that the GUI state of the software to be tested does not migrate.
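The comparison of S501-S506 might be sketched as follows, with the window-set and control-tree accessors as assumptions:

```python
def gui_state_transitioned(first_state, second_state):
    # S502: a different number of window states implies a transition.
    if len(second_state.windows) != len(first_state.windows):
        return True  # S505
    # S503: build the control trees of the two top-level active windows.
    first_tree = first_state.top_window.control_tree()
    second_tree = second_state.top_window.control_tree()
    # S504: same number and types of control objects => same GUI state (S506).
    first_controls = sorted(c.ctrl_type for c in first_tree.all_nodes())
    second_controls = sorted(c.ctrl_type for c in second_tree.all_nodes())
    return first_controls != second_controls  # True -> S505, False -> S506
```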
In this embodiment, when control information for the tester's operation cannot be acquired by the conventional traversal method, a preset dynamic traversal algorithm sends user events to the GUI of the software under test to trigger all hidden control objects in the GUI to be displayed, and all hidden control objects are then traversed according to the mouse position information. This ensures that every control object in the GUI is traversed, yielding reliable control information, improving the usability and reliability of the generated test script, increasing the coverage of the automated GUI test, and improving its accuracy.
Optionally, before S302, this embodiment may also determine whether the first GUI state already exists in the migration state list in the cache. If it does not exist, the first GUI state has not yet been traversed, and S302-S303 continue to be executed. If it does exist, the first GUI state has already been traversed, and the corresponding migration event, i.e., the migration event that transitions the GUI of the software under test into the first GUI state, can be deleted from the user event sequence. This ensures that the traversal proceeds smoothly in subsequent testing and improves traversal sufficiency.
Optionally, before S302, this embodiment may also first obtain the top-level active window style value of the first GUI state, compare it with the top-level active window style values in a style blacklist, and determine whether it is included in the blacklist. If it is not, S302-S303 are executed; otherwise, the corresponding migration event is deleted from the user event sequence.
In this embodiment, the style of the top-level active window may be used as an attribute of the top-level active window, described by its style value. Accordingly, the top-level active window style value of the first GUI state may be obtained from the window state set of the first GUI state, or by invoking a related tool, such as (but not limited to) Spy++ provided by Microsoft.
The style blacklist contains GUI window style values unrelated to the main functions of the software under test, e.g., the styles of windows with no essential function. By setting a style blacklist, the traversal program is effectively prevented from being confined to some unimportant GUI window, further improving traversal efficiency.
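A small sketch of this check; the blacklist contents and the accessor names are assumptions:

```python
# Example style values of windows unrelated to the main functions (assumed values).
STYLE_BLACKLIST = {0x00C00000, 0x80000000}

def should_traverse(first_gui_state, user_events):
    style = first_gui_state.top_window.style_value  # e.g. as shown by a tool like Spy++
    if style in STYLE_BLACKLIST:
        # Skip this window: delete the migration event leading into this state.
        user_events.remove(first_gui_state.entry_event)
        return False
    return True
```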
Exemplarily, fig. 7 is a schematic flowchart for providing another dynamic traversal algorithm according to the second embodiment of the present application. In a specific implementation process, as shown in fig. 7, depth-first traversal based on trigger may be implemented on a control tree in the following specific steps:
s601, initializing a migration state list and a migration event list in the cache, acquiring a current GUI state and a current trigger event, and respectively storing the current GUI state and the current trigger event into the migration state list and the migration event list.
S602, judging whether the transition state list is empty.
In this step, it is determined whether the current GUI state is successfully acquired by determining whether the migration state list is empty, and if so, S603 is executed, otherwise, the process is ended.
S603, whether the current GUI state contains the configuration file or not.
In this step, it is determined whether the current GUI state is associated with a configuration file according to the window state set of the current GUI state, where the configuration file includes a corresponding user event sequence, if so, S604 is executed, and if not, S605 is executed.
S604, triggering the user event in the configuration file, and recording the user event so that it is not triggered again in the subsequent traversal.
S605, traversing the control nodes, i.e., the control objects, of the top-level active window of the current GUI state using a depth-first search (DFS) strategy.
S606, determining whether the traversable control node exists in the current GUI state and whether a user event which can be triggered exists, if the traversable control node and the user event exist at the same time, executing S610, and if not, executing S607-S609.
S607, marking the current GUI state as traversed.
In this step, the current GUI state may be marked as traversed by modifying a state parameter of the current GUI state, such as current.
And S608, triggering a termination event of the current GUI state.
S609, returning the new GUI state (NewGUIState) as an empty set.
S610, saving the current trigger event as the last event (lastEvent) attribute of the current GUI state, and triggering the event to acquire a new GUI state.
S611, judging whether the GUI states before and after the triggering event occurs are the same GUI state.
In this step, it may be determined whether the GUI states before and after the occurrence of the trigger event are the same GUI state, that is, whether GUI state transition occurs, by the methods of S501 to S506. If not, go to S612, otherwise, go to S617.
S612, returning the new GUI state (NewGUIState), which is not empty.
S613, judging whether the acquired new GUI state (NewGUIState) is an empty set. If the set is empty, S614 is executed, otherwise, S615 is executed.
And S614, removing the last GUI state in the transition state list.
S615, whether the acquired new GUI state (NewGUIState) belongs to the transition state list is judged. If so, then perform S617, otherwise, perform S616.
S616, adding the new NewGUIState to the migration state list, and adding the migration event of NewGUIState to the migration event list.
S617, marking the transition event of the current GUI state as a termination event.
In this embodiment, the GUI control tree of the software under test, containing all the control information in the GUI, is extracted through a preset API; the mouse position information at the time the tester operates the software under test is obtained; and the control objects in the GUI control tree are traversed according to the mouse position information to obtain the control information. This realizes API-driven control identification, improves the accuracy of control identification, and thereby improves the accuracy of the automated GUI test, helps improve the performance of the software under test, and improves the user experience.
EXAMPLE III
Exemplarily, fig. 8 is a schematic flow chart of a control identification method provided in a third embodiment of the present application, and as shown in fig. 8, in this embodiment, if the software to be tested supports picture driving, control information for a tester to operate the software to be tested is obtained through the following control identification algorithm:
s701, obtaining a GUI screenshot and mouse position information when a tester operates software to be tested.
This embodiment provides a control recognition algorithm that obtains control information based on the GUI screenshot and the mouse position information. For this purpose, in this step, the GUI screenshot and the mouse position information corresponding to each operation the tester performs on the software under test are first obtained.
In this embodiment, the GUI screenshot may be obtained through a manual screenshot or an automatic screenshot. The manual screenshot refers to that a user manually selects the size and the position of a control screenshot and manually selects an event needing to be triggered; the automatic screenshot refers to automatically acquiring a control screenshot with a proper size and position by a program when a user normally operates the software to be tested, and automatically recording a user event. Fig. 9 is a logic diagram for acquiring a GUI screenshot based on a combination of an automatic screenshot and a manual screenshot provided in the third embodiment of the present application. In the actual operation process, the GUI screenshot and the mouse position information can be obtained by selecting the manual screenshot and the automatic screenshot according to the scene and the requirement, which is not limited herein.
Optionally, in this step, a GUI screenshot of a tester operating the software to be tested is obtained based on a control area detection algorithm of the software interface screenshot, so as to improve the degree of automation and reduce manual involvement. When a tester operates software to be tested each time, the control area detection algorithm can automatically capture a GUI screenshot with a proper size and position, and the GUI screenshot generally comprises character information, picture characteristics and the like.
In this step, the representation and the obtaining of the mouse position information are similar to those in S202, and reference may be made to the description of S202, which is not repeated herein.
And S702, performing image processing on the GUI screenshot by adopting a preset image processing algorithm to obtain a standard GUI screenshot.
In order to facilitate the acquisition of the control information, in this step, a preset image processing algorithm needs to be first adopted to process the original GUI screenshot acquired in S701, so as to obtain a GUI screenshot meeting the requirement of extracting the control information.
In a possible implementation manner, in this step, the image processing on the GUI screenshot includes binarization processing and noise reduction processing, so as to obtain a standard GUI screenshot.
Optionally, in this embodiment, a cvtColor function is first used to convert the color GUI screenshot into a grayscale image. A Sobel function then convolves each pixel of the grayscale image in the x and y directions, and the convolution results are added to obtain a gray approximation for each pixel. The grayscale image is then converted into a binary image according to each pixel's gray approximation and a preset threshold, yielding the contour binary image of the GUI screenshot.
Optionally, in this embodiment, based on mathematical morphology, a number of erosion and dilation operations are performed in the x and y directions on the pixels of the contour binary image, so as to denoise the contour binary image and obtain the standard GUI screenshot.
Specifically, a 3 × 1 convolution kernel may be used to perform several erosion operations on the contour binary image followed by several dilation operations, obtaining the long white line segments in the horizontal direction of the image; a 1 × 3 convolution kernel is then used to perform several erosion operations followed by several dilation operations, obtaining the long white line segments in the vertical direction. Finally, the long white line segments (noise) are subtracted from the contour binary image, and a closing operation is applied, leaving prominent white rectangular boxes.
Mathematical morphology is widely used as a branch of image analysis, which defines structural elements with certain shapes and sizes in advance, and uses the structural elements to extract corresponding shapes in images, thereby achieving the purpose of image analysis and recognition. Mathematical morphology has 2 basic operations, namely dilation operation and erosion operation:
and (3) expansion operation: first, an arbitrarily shaped convolution kernel B is defined, typically a square of 3 x 3 squares, the center point of which is called the anchor point. And when the image A is subjected to expansion processing, sliding the convolution kernel B over the image A, and finding out the maximum pixel value in the convolution kernel area as the anchor pixel value. The effect of the final completion is thus to enlarge the highlight areas in the picture. The convolution kernel acts as a filter during the expansion process. The main function is to fill some holes in the target area and to eliminate small particle noise contained in the target area.
And (3) corrosion operation: corresponding to the expansion operation, when the image is corroded, the convolution kernel B slides through the image A, and the minimum pixel value in the kernel area is found out to be used as the anchor pixel value. Thus, the final effect is to reduce the highlight area and expand the black area in the picture. The main function is to extract backbone information and remove small and meaningless objects such as burrs, isolated pixels and the like.
And the erosion operation and the dilation operation are combined to form an open operation and a closed operation. The method comprises the following steps of performing opening operation, namely performing corrosion treatment on an image, and then performing expansion treatment, wherein the opening operation is mainly used for eliminating small objects, smoothing the boundaries of large objects, disconnecting narrow connection and simultaneously not obviously changing the area of the original image; and (3) performing closed operation, namely performing expansion processing on the image, and then performing corrosion processing, wherein the closed operation is mainly used for effectively filling black holes in the object and connecting adjacent objects without obviously changing the area of the original image.
In this embodiment, the standard GUI screenshot obtained through the above algorithm contains a number of prominent white rectangular frames; these white rectangular frames are the control edges identified in the GUI screenshot.
S703, determining a control screenshot according to the mouse position information and the standard GUI screenshot.
In this step, the white rectangular frame corresponding to the mouse position information is located in the standard GUI screenshot, and the region of that white rectangular frame is cropped from the standard GUI screenshot to obtain the control screenshot.
S704, determining control information according to the center-point coordinate information of the control screenshot.
In this step, the center point of the control screenshot obtained in S703 is identified, the coordinate information of the center point is determined, and the coordinate information of the center point is taken as the position in the GUI of the control operated by the user, that is, the control information.
Illustratively, the control identification algorithm provided by the present embodiment specifically includes the following steps:
Step 1, obtaining a GUI screenshot while the user operates the software to be tested, and obtaining the coordinates of the mouse in the GUI screenshot, namely the mouse position information.
Step 2, converting the GUI screenshot into a gray-scale image through the cvtColor function.
Step 3, performing edge detection on the gray-scale image obtained in step 2 using the Sobel function.
The Sobel function is an image edge detection algorithm widely used in computer graphics. It mainly convolves the original image with two 3 × 3 matrices to estimate the gray-value derivatives in the horizontal and vertical directions respectively, i.e., the gray approximations.
Illustratively, the Sobel function first convolves each pixel in the gray-scale map in the x direction and the y direction by the following formulas:

Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A    (1)

Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A    (2)

where A is the gray-scale image and Gx and Gy are the results of the planar convolution of A in the horizontal and vertical directions, respectively.
The gray approximation of each pixel is then obtained by the following formula:

G = |Gx| + |Gy|    (3)
the Sobel function detects the edge by adopting the phenomenon that the gray weighting difference of upper and lower adjacent points and left and right adjacent points of a pixel point reaches an extreme value at the edge, and can provide more accurate edge direction information.
Step 4, binarizing the gray-scale image processed in step 3 to obtain the contour binary image: the gray approximation of each pixel is compared with the preset threshold, values greater than the threshold are set to 255, and values below it are set to 0, yielding the contour binary image.
Step 5, eroding and dilating the contour binary image obtained in step 4 a number of times in the x direction and the y direction to obtain the long white line segments in the horizontal and vertical directions of the contour binary image.
Step 6, removing the long line segments from the contour binary image.
Step 7, performing a closing operation on the contour binary image obtained in step 6 to obtain prominent white rectangular frames, eliminating the black points inside each white rectangular frame so that it stands out as a complete rectangle.
Step 8, identifying all the prominent white rectangular frames obtained in step 7 and filtering out small rectangular noise.
Step 9, obtaining the corresponding white rectangular frame according to the mouse position information (e.g., the white rectangular frame coinciding with or closest to the mouse position), and cropping the region of that frame from the GUI screenshot to obtain the control screenshot.
Step 10, judging whether the white rectangular frame obtained in step 9 is unique within the GUI screenshot; if so, jumping to step 12, otherwise jumping to step 11.
Step 11, expanding the white rectangular frame obtained in step 9 by 10 pixels on each of its top, bottom, left and right sides (within the bounds of the GUI screenshot), re-capturing the screenshot, and returning to step 10.
Step 12, determining the coordinates of the center point of the control screenshot and recording them to obtain the control information.
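Purely as an illustration of steps 4 to 9, one possible OpenCV (Java) realization is sketched below, taking as input the gradient image G from the previous sketch; the class and method names, the threshold, the iteration count and the minimum-area noise filter are assumptions of the sketch rather than fixed parts of the method:

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class ControlLocator {

    // Steps 4-7: binarize the gradient image, erase long horizontal and vertical
    // line segments (noise), then close the remaining shapes into solid rectangles.
    public static Mat toStandardScreenshot(Mat gradient, double threshold, int iterations) {
        Mat binary = new Mat();
        Imgproc.threshold(gradient, binary, threshold, 255, Imgproc.THRESH_BINARY);

        Mat hKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 1));
        Mat vKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, 3));
        Point anchor = new Point(-1, -1); // default anchor at the kernel center

        Mat hLines = new Mat(), vLines = new Mat();
        Imgproc.erode(binary, hLines, hKernel, anchor, iterations);   // step 5
        Imgproc.dilate(hLines, hLines, hKernel, anchor, iterations);
        Imgproc.erode(binary, vLines, vKernel, anchor, iterations);
        Imgproc.dilate(vLines, vLines, vKernel, anchor, iterations);

        Core.subtract(binary, hLines, binary);                        // step 6
        Core.subtract(binary, vLines, binary);
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Imgproc.morphologyEx(binary, binary, Imgproc.MORPH_CLOSE, kernel); // step 7
        return binary;
    }

    // Steps 8-9: drop tiny rectangles as noise and pick the smallest
    // remaining white rectangle that contains the mouse position.
    public static Rect findControlRect(Mat standard, Point mouse, double minArea) {
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(standard, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        Rect best = null;
        for (MatOfPoint c : contours) {
            Rect box = Imgproc.boundingRect(c);
            if (box.area() < minArea) continue;    // step 8: filter small noise
            if (box.contains(mouse) && (best == null || box.area() < best.area())) {
                best = box;
            }
        }
        return best; // crop this region to obtain the control screenshot, or null
    }
}
```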
In the implementation, this embodiment may realize the above algorithm flow using the relevant OpenCV functions. OpenCV is an open-source C++ library supporting computer-vision techniques, and its contents cover low-level image processing, mid-level image analysis and high-level vision technologies.
Specifically, OpenCV is called from Java, so OpenCV must first be initialized: the OpenCV dll library is loaded before the main driver program runs, and different Windows runtime environments load different OpenCV dll libraries. After the dll library is loaded, a GUI screenshot of the application software to be tested is acquired and stored in Mat format; the Mat format is OpenCV's fundamental data type, a multi-dimensional, multi-channel array. The GUI screenshot is then processed with the relevant functions, including the cvtColor function, the Sobel function, the dilate function, the erode function and the findContours function.
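A minimal sketch of this initialization and capture step is given below, assuming OpenCV's Java bindings and the AWT Robot class; the class name and the full-screen capture region are illustrative choices:

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class ScreenCapture {
    static {
        // Load the OpenCV native library (a dll on Windows) before any OpenCV call.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    // Captures the full screen and stores it in Mat format (8-bit, 3-channel BGR).
    public static Mat captureScreen() throws Exception {
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage shot = new Robot().createScreenCapture(screen);
        // Robot yields an int-backed RGB image; redraw it into a byte-backed BGR one.
        BufferedImage bgr = new BufferedImage(shot.getWidth(), shot.getHeight(),
                BufferedImage.TYPE_3BYTE_BGR);
        bgr.getGraphics().drawImage(shot, 0, 0, null);
        byte[] pixels = ((DataBufferByte) bgr.getRaster().getDataBuffer()).getData();
        Mat mat = new Mat(bgr.getHeight(), bgr.getWidth(), CvType.CV_8UC3);
        mat.put(0, 0, pixels);
        return mat;
    }
}
```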
The cvtColor function converts an image from one color space to another (for example, RGB to HSV or HSI) and can also convert it into a gray-scale image. The Sobel function differentiates the image in the x or y direction to perform edge detection, the dilate function performs dilation, the erode function performs erosion, and the findContours function identifies and detects the white rectangular frames in the image. In this embodiment, the GUI screenshot and the mouse position information of the software to be tested are obtained while the tester operates the software; the GUI screenshot is processed with a preset image processing algorithm to obtain the standard GUI screenshot, the control screenshot is determined according to the mouse position information and the standard GUI screenshot, and the control information is determined according to the center-point coordinate information of the control screenshot. Control identification based on picture driving is thus realized, which improves the accuracy of control identification and hence of automated GUI testing, helps improve the performance of the software to be tested, and improves the user experience.
EXAMPLE four
Fig. 10 is a schematic structural diagram of a black box automated testing apparatus based on a graphical user interface according to a fourth embodiment of the present application. As shown in fig. 10, the black box automated testing apparatus 10 based on a graphical user interface in this embodiment includes:
a recording module 11 and a playback module 12.
The recording module 11 is used for acquiring event information and control information of a tester operating the software to be tested; generating a test script according to the event information, the control information and a preset assertion;
and the playback module 12 is configured to run the test script on a driving engine, perform GUI test on the software to be tested, and generate a test report.
Optionally, the recording module 11 is specifically configured to:
monitoring the input equipment, and recording event information of a tester operating the software to be tested;
and acquiring control information of the software to be tested operated by the tester through a control identification algorithm.
Optionally, if the software to be tested supports API driving, the recording module 11 is specifically configured to:
extracting a GUI control tree of the software to be tested through a preset API, wherein the GUI control tree comprises all control objects of the software to be tested;
acquiring mouse position information when a tester operates the software to be tested;
and traversing the control object in the GUI control tree according to the mouse position information to obtain the control information.
Optionally, the recording module 11 is specifically configured to:
determining whether a target control object matched with the mouse position information exists in the GUI control tree or not;
and if so, acquiring the information of the target control object to obtain the control information.
Optionally, the recording module 11 is further configured to:
if not, sending a user event to the GUI of the software to be tested by adopting a preset dynamic traversal algorithm so as to trigger the hidden control object in the GUI to be displayed;
and obtaining the control information according to the mouse position information and the hidden control object in the GUI.
Optionally, the recording module 11 is specifically configured to:
acquiring a first GUI state of the software to be tested, wherein the first GUI state comprises a first window state set of a current GUI;
determining a user event sequence sent to the GUI of the software to be tested according to the first GUI state;
and sending a user event to the GUI of the software to be tested according to the user event sequence so as to trigger the hidden control object in the GUI to be displayed.
Optionally, the recording module 11 is specifically configured to:
determining whether a top-level active window of the current GUI contains a configuration file according to the first window state set;
if yes, acquiring a user event sequence from the configuration file;
and if not, determining the position index of the first GUI state in the state sequence table, and acquiring the user event sequence from the event sequence table according to the position index.
Optionally, the recording module 11 is specifically configured to:
acquiring a target user event from the user event sequence;
sending the target user event to the GUI;
when the GUI responds to the target user event, acquiring a second GUI state of the software to be tested, wherein the second GUI state comprises a second window state set of the current GUI;
determining whether the GUI state of the software to be tested is transferred or not according to the second window state set and the first window state set;
and if the migration occurs, respectively storing the first GUI state and the target user event into a cache migration state list and a migration event list, and acquiring the next target user event from the user event list.
Optionally, the recording module 11 is specifically configured to:
determining whether the number of window states in the second set of window states is the same as the number of window states in the first set of window states;
if the window state quantity is different, determining that the GUI state transition of the software to be tested occurs;
if the window state quantity is the same, respectively acquiring a second top-level active window control tree of the second GUI state and a first top-level active window control tree of the first GUI state according to the second window state set and the first window state set;
determining whether the control objects in the second top-level active window control tree are the same as the control objects in the first top-level active window control tree;
and if the control objects are different, determining that the GUI state of the software to be tested is transferred.
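By way of example only, the transition check described above might be sketched as follows; the data model (window sets as lists with the top-level active window at index 0, and control trees serialized to strings) is a simplification assumed for the sketch, using Java 16+ records for brevity:

```java
import java.util.List;

public class GuiStateComparator {

    // Simplified window state: a window handle plus a serialized control tree.
    public record WindowState(String handle, List<String> controlTree) {}

    // Returns true if a GUI state transition occurred: differing window counts
    // imply a transition; with equal counts, the control trees of the
    // top-level active windows (index 0 here) are compared.
    public static boolean hasTransitioned(List<WindowState> first, List<WindowState> second) {
        if (second.size() != first.size()) {
            return true; // a window was opened or closed
        }
        List<String> firstTree = first.get(0).controlTree();
        List<String> secondTree = second.get(0).controlTree();
        return !secondTree.equals(firstTree); // differing control objects
    }
}
```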
Optionally, the recording module 11 is further configured to:
determining whether the first GUI state exists in a migration state list in a cache;
correspondingly, the recording module 11 is specifically configured to:
and if the first GUI state does not exist in the migration state list, determining a user event sequence sent to the GUI of the software to be tested according to the first GUI state.
Optionally, the recording module 11 is further configured to:
acquiring a top-level active window style value of the first GUI state;
determining whether the top-level active window style value is in a style blacklist;
correspondingly, the recording module 11 is specifically configured to:
and if the style value of the top-level active window is not in a style blacklist, determining a user event sequence sent to the GUI of the software to be tested according to the first GUI state.
Optionally, if the software to be tested supports picture driving, the recording module 11 is specifically configured to:
acquiring a GUI screenshot and mouse position information when a tester operates the software to be tested;
performing image processing on the GUI screenshot by adopting a preset image processing algorithm to obtain a standard GUI screenshot;
determining a control screenshot according to the mouse position information and the standard GUI screenshot;
and determining the control information according to the central point coordinate information of the control screenshot.
Optionally, the recording module 11 is specifically configured to:
converting the GUI screenshot through an objective function to obtain a contour binary image of the GUI screenshot, wherein the objective function comprises a cvtColor function and a Sobel function;
and denoising the contour binary image through morphological operations to obtain the standard GUI screenshot, wherein the morphological operations comprise the dilation operation and the erosion operation.
Optionally, the recording module 11 is specifically configured to:
combining the event information and the control information to generate a test step;
combining the control information and the target assertion to generate a step check point;
combining the test step and the step checkpoint to generate the test script.
Optionally, the preset assertions include control-level assertions, picture assertions, and combination assertions.
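As a sketch of how the recorded information might be combined into a script, the following illustrative Java types pair each test step with its step checkpoint; all type and field names are hypothetical, not the data model of the embodiment:

```java
import java.util.ArrayList;
import java.util.List;

public class ScriptAssembler {

    public record TestStep(String eventType, String controlPath) {}
    public record StepCheckpoint(String controlPath, String assertion) {}
    public record ScriptEntry(TestStep step, StepCheckpoint checkpoint) {}

    // Pairs each recorded test step with its checkpoint (null when the step
    // carries no assertion) to form the test script.
    public static List<ScriptEntry> assemble(List<TestStep> steps,
                                             List<StepCheckpoint> checkpoints) {
        List<ScriptEntry> script = new ArrayList<>();
        for (int i = 0; i < steps.size(); i++) {
            StepCheckpoint cp = i < checkpoints.size() ? checkpoints.get(i) : null;
            script.add(new ScriptEntry(steps.get(i), cp));
        }
        return script;
    }
}
```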
Fig. 11 is a schematic structural diagram of another black box automated testing apparatus based on a graphical user interface according to a fourth embodiment of the present disclosure, and as shown in fig. 11, the black box automated testing apparatus 10 based on a graphical user interface according to the present embodiment further includes: a script management module 13.
In this embodiment, the script management module 13 optimizes the test script in two respects: data driving and modularization. Fig. 12 is a schematic diagram of the design architecture of the script management module provided in the fourth embodiment of the present application. As shown in fig. 12, in this embodiment the script management module 13 divides the test script into a logic part and a data part. The logic part contains information such as event types and control paths; the data part contains the variable data to be compared, such as input variables or assertions in the test steps.
The script management interface establishes a mapping to the data of the data-driving layer, and a tester can modify the variable data in the automated test script through the GUI of the script management interface.
Data driving mainly establishes a mapping relation between the test steps of the script and the variable data, thereby separating the test logic from the variable data.
Script modularization mainly enables one script to execute the test steps of another script by reference. With a suitable modular script organization, coupling relations can be established among script functions and scripts of different modules can reference one another, making the test scripts more reusable.
In this embodiment, in order to implement the data driving and modularization of the script management module 13 and give the scripts data-separation and dynamic-expansion capabilities, an XML file structure is adopted to store the test steps and the test variable data separately. Because an XML file can be accessed structurally through the Document Object Model (DOM), atomic CRUD operations on the XML file can be realized quickly. In the playback phase, the test step file and the test variable file are organized together to form the complete automated test steps.
In addition, this embodiment extracts the variable data in the variable file through the DOM and maps the data to the script-management graphical user interface. The tester can modify the data part through the GUI to realize data-driven script management. In the test step file, script module references are defined through the Module tag; when the test step file is converted into a script object, any Module tag encountered causes the steps of the referenced script to be added to the steps of the corresponding script object, thereby realizing modularization. In the implementation, the JAR package of DOM4J is mainly used to provide the API for parsing the XML files.
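As an illustration of the DOM-based parsing and Module-tag expansion just described, a minimal DOM4J sketch follows (DOM4J 2.x generics assumed); the XML layout, with a root element whose children are Step elements plus Module elements referencing other step files through a ref attribute, is an assumption of the sketch:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.dom4j.Document;
import org.dom4j.Element;
import org.dom4j.io.SAXReader;

public class StepFileLoader {

    // Reads a test step file and expands <Module ref="..."/> references by
    // inlining the steps of the referenced script (modularization).
    public static List<Element> loadSteps(File stepFile) throws Exception {
        Document doc = new SAXReader().read(stepFile); // DOM access to the XML file
        List<Element> steps = new ArrayList<>();
        for (Element child : doc.getRootElement().elements()) {
            if ("Module".equals(child.getName())) {
                File referenced = new File(stepFile.getParentFile(),
                        child.attributeValue("ref"));
                steps.addAll(loadSteps(referenced)); // inline the referenced steps
            } else {
                steps.add(child); // an ordinary test step
            }
        }
        return steps;
    }
}
```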
The black box automatic testing device based on the graphical user interface provided by the embodiment can execute the black box automatic testing method based on the graphical user interface provided by the method embodiment, and has corresponding functional modules and beneficial effects of the execution method. The implementation principle and technical effect of this embodiment are similar to those of the above method embodiments, and are not described in detail here.
EXAMPLE five
Fig. 13 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application, and as shown in fig. 13, the electronic device 20 includes a memory 21, a processor 22, and a computer program stored in the memory and executable on the processor; the number of the processors 22 of the electronic device 20 may be one or more, and one processor 22 is taken as an example in fig. 13; the processor 22 and the memory 21 in the electronic device 20 may be connected by a bus or other means, and fig. 13 illustrates the connection by the bus as an example.
The memory 21 is a computer-readable storage medium that can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the recording module 11 and the playback module 12 in the embodiments of the present application. By running the software programs, instructions and modules stored in the memory 21, the processor 22 executes the various functional applications and data processing of the electronic device, thereby implementing the black box automated testing method based on the graphical user interface.
The memory 21 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and the application programs required for at least one function, and the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 21 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 21 may further include memory located remotely from the processor 22, which may be connected to the electronic device through a network. Examples of such a network include, but are not limited to, the internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
EXAMPLE six
A sixth embodiment of the present application further provides a computer-readable storage medium, having stored thereon a computer program, which when executed by a computer processor, is configured to perform a method for black box automated testing based on a graphical user interface, the method comprising:
acquiring event information and control information of a tester operating software to be tested;
generating a test script according to the event information, the control information and a preset assertion;
and running the test script on a driving engine, carrying out GUI test on the software to be tested, and generating a test report.
Of course, the computer program of the computer-readable storage medium provided in this embodiment of the present application is not limited to the method operations described above, and may also perform related operations in the black box automated testing method based on the graphical user interface provided in any embodiment of the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus the necessary general-purpose hardware, and certainly also by hardware alone, though the former is the preferable embodiment in many cases. Based on this understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
It should be noted that, in the embodiment of the black box automated testing apparatus based on the gui, the included units and modules are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (19)

1. A black box automatic testing method based on a graphical user interface is characterized by comprising the following steps:
acquiring event information and control information of a tester operating software to be tested;
generating a test script according to the event information, the control information and a preset assertion;
and running the test script on a driving engine, carrying out GUI test on the software to be tested, and generating a test report.
2. The method of claim 1, wherein the obtaining event information and control information of the tester operating the software to be tested comprises:
monitoring the input equipment, and recording event information of a tester operating the software to be tested;
and acquiring control information of the software to be tested operated by the tester through a control identification algorithm.
3. The method of claim 2, wherein if the software under test supports API driving, the obtaining control information of the tester operating the software under test through a control recognition algorithm comprises:
extracting a GUI control tree of the software to be tested through a preset API, wherein the GUI control tree comprises all control objects of the software to be tested;
acquiring mouse position information when a tester operates the software to be tested;
and traversing the control object in the GUI control tree according to the mouse position information to obtain the control information.
4. The method of claim 3, wherein traversing the control object in the GUI control tree according to the mouse position information to obtain the control information comprises:
determining whether a target control object matched with the mouse position information exists in the GUI control tree or not;
and if so, acquiring the information of the target control object to obtain the control information.
5. The method of claim 4, further comprising:
if not, sending a user event to the GUI of the software to be tested by adopting a preset dynamic traversal algorithm so as to trigger the hidden control object in the GUI to be displayed;
and obtaining the control information according to the mouse position information and the hidden control object in the GUI.
6. The method according to claim 5, wherein the sending a user event to the GUI of the software under test by using a preset dynamic traversal algorithm to trigger the hidden control object in the GUI to be displayed comprises:
acquiring a first GUI state of the software to be tested, wherein the first GUI state comprises a first window state set of a current GUI;
determining a user event sequence sent to the GUI of the software to be tested according to the first GUI state;
and sending a user event to the GUI of the software to be tested according to the user event sequence so as to trigger the hidden control object in the GUI to be displayed.
7. The method of claim 6, wherein determining the sequence of user events sent to the GUI of the software under test based on the first GUI state comprises:
determining whether a top-level active window of the current GUI contains a configuration file according to the first window state set;
if yes, acquiring a user event sequence from the configuration file;
and if not, determining the position index of the first GUI state in the state sequence table, and acquiring the user event sequence from the event sequence table according to the position index.
8. The method of claim 7, wherein sending a user event to the GUI of the software under test to trigger a hidden control object in the GUI to be displayed according to the sequence of user events comprises:
acquiring a target user event from the user event sequence;
sending the target user event to the GUI;
when the GUI responds to the target user event, acquiring a second GUI state of the software to be tested, wherein the second GUI state comprises a second window state set of the current GUI;
determining whether the GUI state of the software to be tested is transferred or not according to the second window state set and the first window state set;
and if the migration occurs, respectively storing the first GUI state and the target user event into a cache migration state list and a migration event list, and acquiring the next target user event from the user event list.
9. The method of claim 8, wherein determining whether the GUI state transition of the software under test occurs according to the second window state set and the first window state set comprises:
determining whether the number of window states in the second set of window states is the same as the number of window states in the first set of window states;
if the window state quantity is different, determining that the GUI state transition of the software to be tested occurs;
if the window state quantity is the same, respectively acquiring a second top-level active window control tree of the second GUI state and a first top-level active window control tree of the first GUI state according to the second window state set and the first window state set;
determining whether the control objects in the second top-level active window control tree are the same as the control objects in the first top-level active window control tree;
and if the control objects are different, determining that the GUI state of the software to be tested is transferred.
10. The method of claim 6, wherein prior to determining the sequence of user events to send to the GUI of the software under test based on the first GUI state, the method further comprises:
determining whether the first GUI state exists in a migration state list in a cache;
correspondingly, the determining the user event sequence sent to the GUI of the software to be tested according to the first GUI state includes:
and if the first GUI state does not exist in the migration state list, determining a user event sequence sent to the GUI of the software to be tested according to the first GUI state.
11. The method of claim 6, wherein prior to determining the sequence of user events to send to the GUI of the software under test based on the first GUI state, the method further comprises:
acquiring a top-level active window style value of the first GUI state;
determining whether the top-level active window style value is in a style blacklist;
correspondingly, the determining the user event sequence sent to the GUI of the software to be tested according to the first GUI state includes:
and if the style value of the top-level active window is not in a style blacklist, determining a user event sequence sent to the GUI of the software to be tested according to the first GUI state.
12. The method according to claim 2, wherein if the software to be tested supports picture driving, the obtaining control information of the tester operating the software to be tested through a control identification algorithm comprises:
acquiring a GUI screenshot and mouse position information when a tester operates the software to be tested;
performing image processing on the GUI screenshot by adopting a preset image processing algorithm to obtain a standard GUI screenshot;
determining a control screenshot according to the mouse position information and the standard GUI screenshot;
and determining the control information according to the central point coordinate information of the control screenshot.
13. The method of claim 12, wherein the image processing the GUI screenshot using a predetermined image processing algorithm to obtain a standard GUI screenshot comprises:
converting the GUI screenshot through an objective function to obtain a contour binary image of the GUI screenshot, wherein the objective function comprises a cvtColor function and a Sobel function;
and denoising the contour binary image through morphological operations to obtain the standard GUI screenshot, wherein the morphological operations comprise a dilation operation and an erosion operation.
14. The method of claim 1, wherein generating a test script according to the event information, the control information, and a preset assertion comprises:
combining the event information and the control information to generate a test step;
combining the control information and the target assertion to generate a step check point;
combining the test step and the step checkpoint to generate the test script.
15. The method of any of claims 1-14, wherein the preset assertions include control-level assertions, picture assertions, and combination assertions.
16. The method according to any one of claims 1-14, wherein after generating a test script according to the event information, the control information, and a preset assertion, the method further comprises:
splitting the test script into a logic part and a data part, and establishing a mapping relation between the logic part and the data part;
and respectively storing the logic part, the data part and the mapping relation.
17. A black box automatic testing device based on a graphical user interface is characterized by comprising:
the recording module is used for acquiring event information and control information of the software to be tested operated by a tester; generating a test script according to the event information, the control information and a preset assertion;
and the playback module is used for running the test script on a driving engine, carrying out GUI test on the software to be tested and generating a test report.
18. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for black box automated testing based on a graphical user interface of any one of claims 1-16 when executing the program.
19. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method for black box automated testing based on a graphical user interface according to any one of claims 1 to 16.