WO2021061185A1 - Test automation of application - Google Patents

Test automation of application Download PDF

Info

Publication number
WO2021061185A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
user
sequence
application
generate
Prior art date
Application number
PCT/US2020/020045
Other languages
French (fr)
Inventor
Sungjin Park
Changnam AN
Sangin HAN
Hyungjong Kang
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Publication of WO2021061185A1 publication Critical patent/WO2021061185A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3438Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment monitoring of user actions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3604Software analysis for verifying properties of programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

Definitions

  • FIG. 1 is a block diagram of a test automation system according to an example.
  • FIG. 2 shows a configuration that generates a default path (Full-path) sequence according to an example.
  • FIG. 3 illustrates element extraction from screen information collected in the form of extensible markup language (XML) according to an example.
  • FIG. 4 illustrates path generation according to an example.
  • FIG. 5 illustrates path sequence generation according to an example.
  • FIG. 6 illustrates a path graph according to an example.
  • FIG. 7 shows a configuration that generates a user path (Human-generated path) sequence according to an example.
  • FIG. 8 illustrates generation of a test sequence according to an example.
  • FIG. 9 illustrates path sequence optimization according to an example.
  • FIG. 10 illustrates log data, which is a base of path generation, according to an example.
  • FIG. 11 illustrates scenario sequence generation of an application according to an example.
  • FIG. 12 illustrates a configuration for execution of a test using a test sequence according to an example.
  • FIG. 13 is a flowchart of a default path sequence generation process according to an example.
  • FIG. 14 is a flowchart of a user path sequence generation process according to an example.
  • FIG. 15 is a flowchart of a user path sequence generation process according to an example.
  • FIG. 16 is a block diagram of a test automation system according to an example.
  • FIG. 17 is a flowchart of an expansion process of a test sequence according to an example.
  • FIG. 18 is a flowchart of a process for providing guide information according to an example.
  • FIG. 19 illustrates a personalized menu providing method according to an example.
  • FIG. 20 illustrates a personalized menu providing method according to an example.
  • FIG. 21 illustrates a method for providing a personalized workflow according to an example.
  • FIG. 22 illustrates a process for generating a tutorial content according to an example.
  • FIG. 23 illustrates a method for providing guide information to a user or an application user according to an example.
  • FIG. 24 is a schematic diagram of a computing device according to an example.
  • a server and a device may be composed of hardware including at least one processor, a memory, a communication apparatus, etc., and a program executed in combination with hardware is stored in a designated location.
  • the hardware may have a configuration and performance to implement an example method as described below.
  • the program may include instructions that implement an example method of operation as described with reference to the drawings and is executed in combination with hardware such as a processor and a memory.
  • transmission or provision may include not only direct transmission or provision, but also indirect transmission or provision through other paths or through other devices.
  • an expression recited in the singular may be construed as singular or plural unless the expression “one”, “single”, etc. is used.
  • Test automation technology for an application can generate a test scenario by determining and learning a test path without any developer intervention.
  • the test scenario becomes too large and the test automation time becomes long.
  • test scenarios created without developer intervention may not grasp the intention of the developer, and thus, sophisticated verification cannot be performed.
  • no specialized verification can be performed for users who use intuitive, function-driven scenarios. Accordingly, no exceptions can be verified.
  • a manual test performed by a developer has small coverage of the test scenario, and thus the verification level of the application is low.
  • the manual test performed by the developer may cause a problem in that an unverified path may be used for the test due to an unintended method of a user.
  • a test automation system that generates a test scenario to which an inferred user’s intention is reflected by using user behavior information, and uses the generated test scenario in a test of an application, and a method thereof, can be provided.
  • a described example may be one of typical test activities.
  • Such a typical test activity may include a functional test, a performance test, a user interface (UI) & usability test, a compatibility test, an application programming interface (API) integration test, a data test, and the like.
  • the UI & usability test may include an operational acceptance test, a design acceptance test, and a data test.
  • the operational acceptance test may include a UI test.
  • the design acceptance test may include a visual test and a usability test.
  • the data test may test the connection uniform resource link (URL) of data such as “Image/text content test”.
  • a test target is an application that provides a graphical user interface (GUI) and may include a web page displayed in a dedicated application or a browser.
  • the GUI may be formed of a plurality of pages.
  • Each page may include a plurality of elements.
  • Each element may include a visual unit constituting a page and may be set to interact with a user.
  • the element may include text information and/or visual information.
  • the visual information may include a button, a check-box, and the like.
  • When a user action (for example, a click) specified in the element is input, a current page may be switched to a next page, or a current state may be switched to a next state. For example, when button1 is clicked in an A page, the current page may be switched to a B page.
  • the application may be tested by at least one test scenario.
  • the test scenario may be a sequence with some purpose.
  • a test scenario for sending a fax may be a sequence of connected elements that must pass sequentially from the start state to the fax send state.
  • a path sequence may be formed of sequentially connected paths.
  • one path may be defined as an activity derived by using an action as an input in a current state.
  • a path sequence for reaching a D page from the A page may be formed of, for example, three paths as shown in Table 1.
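The path and path-sequence notions above can be sketched in code. This is a minimal illustration only: the page names and actions below are assumptions, not the actual contents of Table 1, which is not reproduced in this text.

```python
from typing import NamedTuple

class Path(NamedTuple):
    current_state: str  # page shown before the action
    action: str         # user action input on an element
    next_state: str     # page shown after the action

# A hypothetical path sequence reaching the D page from the A page,
# formed of three sequentially connected paths.
path_sequence = [
    Path("A", "click button", "B"),
    Path("B", "click button", "C"),
    Path("C", "click button", "D"),
]

def is_connected(seq):
    """A valid path sequence: each path starts where the previous one ended."""
    return all(a.next_state == b.current_state for a, b in zip(seq, seq[1:]))
```

The connectivity check captures the requirement that the elements "must pass sequentially from the start state" to the destination state.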
  • a test automation system 1 may include a user device 100 and a test server 200.
  • the user device 100 and the test server 200 may each be a computing device, each including a memory and at least one processor.
  • the processor is to execute instructions of a program loaded into the memory.
  • the user device 100 may be provided in plural, but it will be described as a single user device in the following description for the sake of convenience.
  • the user device 100 refers to a device that loads an application, which is a test target, and tests the application by using a test scenario (i.e., a test case) provided by the test server 200.
  • the user device 100 may be a mobile terminal, a laptop, a tablet device, and the like.
  • it is assumed that the application is tested by a developer to determine a normal operation before public distribution.
  • the application may be publicly distributed to general users and may be a test target as necessary even after the distribution.
  • the test server 200 generates a test case based on information collected from the user device 100 and may continuously update the data by expanding or modifying the information.
  • the test scenario generated by the test server 200 may be used in testing of the application.
  • the user device 100 may include an application 101, a screen crawling engine 103, a user behavior collector 105, and a test agent 107.
  • the test server 200 may include a UI learning module 201, a default path sequence generator 203, a default path list database (DB) 205, a user path sequence generator 207, a user path list DB 209, a test sequence generator 211, a test sequence DB 213, and a testing engine 215.
  • the application 101 is a test target.
  • the application 101 may be an application residing in the user device 100 or a web application that is downloaded and executed while connected to the Internet.
  • the screen crawling engine 103 may be able to collect screen information (e.g., GUI images, screenshots, etc.) as page units while the application 101 is being executed.
  • the screen crawling engine 103 may collect screen information using “UI Automator”, a test framework provided by the mobile OS.
  • the screen crawling engine 103 may collect screen information by using “WebDriver” or “Selenium”.
  • the screen crawling engine 103 may transmit collected screen information to the UI learning module 201.
  • the application 101 may execute (e.g., click) all elements on the GUI image displayed on the screen according to a random command.
  • the screen crawling engine 103 may collect a GUI image and page information according to execution of the element and transmit the collected image and information to the UI learning module 201.
  • the screen crawling engine 103 may click any element arbitrarily. In another example, the screen crawling engine 103 may select and click an element set in advance by the user.
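The random-execution crawl described above can be sketched as follows. A toy transition table stands in for the live application, since in practice the engine would drive a real GUI via UI Automator, WebDriver, or Selenium as noted; all page and element names here are illustrative assumptions.

```python
import random

# Toy stand-in for the application under test: for each page, the elements
# it exposes and the page each element leads to when clicked.
APP_MODEL = {
    "A": {"button1": "A", "button2": "B"},
    "B": {"button3": "C"},
    "C": {"button1": "D", "button2": "C"},
    "D": {},
}

def random_crawl(start="A", steps=20, seed=0):
    """Randomly click elements and record (page, element, next_page) paths."""
    rng = random.Random(seed)
    page, records = start, []
    for _ in range(steps):
        elements = APP_MODEL[page]
        if not elements:          # dead-end page: restart the crawl
            page = start
            continue
        element = rng.choice(sorted(elements))
        next_page = elements[element]
        records.append((page, element, next_page))
        page = next_page
    return records

records = random_crawl()
```

Each recorded triple corresponds to one observed path; the collected records are the raw material the UI learning module receives.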
  • the UI learning module 201 may receive screen information (e.g., screenshots), guide information (e.g., human-guided input), or user behavior inference information (e.g., developer behavior input & intention labels).
  • the UI learning module 201 may collect screen information (e.g., GUI images, page information, etc.) from the screen crawling engine 103.
  • the UI learning module 201 may collect screen information in real time when the application 101 is executed.
  • the screen information may be collected by using AppView framework of the user device 100 where the application 101 is executed.
  • the UI learning module 201 may receive guide information from an external source.
  • arbitrary random tests incur a long time delay.
  • the human-guided input for preparing such a case may be a test input value generated when the developer pre-tests the application, or a manually set value.
  • the UI learning module 201 may receive the developer behavior input & intention labels from an external source. This may imply information labeled with respect to a specific behavior by a user.
  • the UI learning module 201 may set machine learning data based on the input information.
  • the UI learning module 201 may perform a random test that requires the screen crawling engine 103 to randomly execute a screen of a crawling target.
  • the screen information collected by the UI learning module 201 may include source codes (e.g., XML).
  • the screen information collected from a web page by the UI learning module 201 may be as shown in Table 2.
  • the UI learning module 201 may extract (e.g., visually recognize) elements from the screen information.
  • the UI learning module 201 may learn (e.g., UI learning) the extracted elements.
  • the elements used when the UI learning module 201 is learning may be as shown in Table 3.
  • An example process for the UI learning module 201 to extract elements is shown in FIG. 3.
  • a login page P10 crawled by the screen crawling engine 103 may include an E-mail address input field P11, a password input field P13, and an authorize button P15.
  • source codes that represent the respective fields P11, P13, and P15 are displayed.
  • the UI learning module 201 may extract elements from the source codes. For example, the UI learning module 201 may extract elements such as “E-mail Address”, “clickable”, “enabled”, “focusable”, and the like from the source codes that represent the E-mail address input field P11. The UI learning module 201 may extract elements such as 'resource-id', 'password', 'android.widget.EditText', 'clickable', and the like from the source codes that represent the Password input field P13. The UI learning module 201 may extract elements such as 'Authorize', 'resource-id', 'android.widget.Button', 'clickable', 'focusable', and the like from the source codes that represent the Authorize button P15.
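Element extraction from XML screen information can be sketched with the standard library. The XML snippet below is a hypothetical UI-dump stand-in for the crawled login page; the patent's Table 2 is not reproduced here, so attribute names follow common Android UI-dump conventions as an assumption.

```python
import xml.etree.ElementTree as ET

# Hypothetical screen source standing in for the crawled login page P10.
SCREEN_XML = """
<hierarchy>
  <node resource-id="email" class="android.widget.EditText"
        text="E-mail Address" clickable="true" enabled="true" focusable="true"/>
  <node resource-id="password" class="android.widget.EditText"
        text="" clickable="true" enabled="true" focusable="true"/>
  <node resource-id="authorize" class="android.widget.Button"
        text="Authorize" clickable="true" enabled="true" focusable="true"/>
</hierarchy>
"""

def extract_elements(xml_source):
    """Pull the attributes the UI learning step cares about from each node."""
    root = ET.fromstring(xml_source)
    elements = []
    for node in root.iter("node"):
        elements.append({
            "resource-id": node.get("resource-id"),
            "class": node.get("class"),
            "text": node.get("text"),
            "clickable": node.get("clickable") == "true",
        })
    return elements

elements = extract_elements(SCREEN_XML)
```

Each extracted dictionary corresponds to one element (input field or button) that the learning module would then label.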
  • the UI learning module 201 may extract types and formats of elements from source codes that represent the GUI images.
  • the UI learning module 201 may extract an element bitmap image based on a coordinate value of an element known in advance. Keras and TensorFlow, which are deep learning libraries that support a convolutional neural network (CNN) method, may be used to determine the meaning of an element image.
  • a random forest algorithm of Natural Language Toolkit (NLTK) and scikit-learn may be utilized for classification of text information extracted from the element image.
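The text mentions a random forest over element text (NLTK/scikit-learn). As a dependency-free stand-in that shows the labeling idea only, the sketch below scores element text against per-label keyword sets; the labels and keywords are illustrative assumptions, not the classifier the patent describes.

```python
# Illustrative label vocabulary; a real system would train a random forest
# on extracted element text instead of using fixed keyword sets.
LABEL_KEYWORDS = {
    "login": {"e-mail", "password", "authorize", "sign"},
    "payment": {"card", "checkout", "pay", "price"},
}

def classify_text(text):
    """Assign the label whose keyword set overlaps the element text the most."""
    tokens = set(text.lower().replace(",", " ").split())
    scores = {label: len(tokens & kw) for label, kw in LABEL_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

For example, text gathered from the fields of the login page P10 would score highest for the "login" label, matching the auto-labeling step described next.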
  • the UI learning module 201 may use the extracted elements as training data of a machine learning model.
  • the UI learning module 201 may infer (i.e., auto-labeling) the intention of screen information by learning the extracted elements.
  • the meaning and purpose of these elements can be derived. Based on the meaning and purpose of the elements, the intent of the screen on which these elements are displayed can be inferred as a login page.
  • the UI learning module 201 may extract a plurality of elements constituting a plurality of pieces of screen information from the plurality of screenshots constituting the application, and the extracted elements can be learned.
  • the UI learning module 201 may infer the meaning of an extracted element by determining a text attribute and an object of the element and may assign a label based on the inference result. For example, a shopping cart may be labeled with a shopping cart-shaped bitmap. The meaning of an element whose meaning cannot be inferred from its attribute and purpose may be inferred by using machine learning.
  • UI learning is a process of inferring and labeling the meaning of elements included in the collected screenshots.
  • the UI learning module 201 may output the learning result and inferred intention of the elements, that is, the UI learning result, to the default path sequence generator 203.
  • the default path sequence generator 203 may generate a path sequence with elements that are sequentially connected using the UI learning result received from the UI learning module 201.
  • since the path sequence is generated for all pages constituting the application, it may be referred to as a default path sequence.
  • the default path sequence is created automatically by a program without user action, so it is used as the default.
  • the default path sequence generator 203 may search a menu structure using elements and generate a sequence based on the menu structure. Here, when an element in the GUI image is clicked, paths for moving to another GUI page are continuously connected by the path sequence.
  • a plurality of buttons may be included in the A page, which is a GUI page of the current state.
  • when a user clicks the button1 in the A page, the A page is maintained. That is, no transition occurs to another page.
  • pages are illustrated in the figures, but a page may also be expressed as a state.
  • a transition in the same node or a transition between two nodes may be defined as a state transition, and the state transition represents a path.
  • a path that moves to another page when an element in the page is clicked may be referred to as a traverse path.
  • a path that moves to another page when an element in the page is clicked is simply referred to as a path hereinafter.
  • a plurality of buttons may be included in the A page, which is a GUI page of a current state.
  • the page is switched to the B page (①).
  • the page is switched to the C page (②).
  • the C page is maintained (③).
  • when the user clicks the button1 in the C page, the page is switched to the D page (④).
  • the page is switched to the next page corresponding to the button or the current page is maintained.
  • Each operation is called a path, and a path sequence is formed by sequentially connecting the respective paths. That is, when the A page is considered an initial page and the D page is considered a destination page, a path sequence for execution of a random menu function is ①→②→③→④.
  • the default path sequence generator 203 can generate all possible sequences using the elements.
  • the default path sequence generator 203 may generate a path sequence using a tree search algorithm, a model-based testing method, and the like.
  • the default path sequence generator 203 may generate a path sequence as shown in (A) by connecting state transitions within a page, or state transitions between different states or pages.
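The tree-search idea can be sketched as a depth-first enumeration of every sequence from a start page to a destination page over a path graph. The graph below is an illustrative assumption, not the application's actual page structure.

```python
# Illustrative path graph: page -> pages reachable by clicking some element.
GRAPH = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D"],
    "D": [],
}

def all_sequences(graph, start, end, prefix=None):
    """Depth-first enumeration of all simple paths from start to end."""
    prefix = (prefix or []) + [start]
    if start == end:
        return [prefix]
    sequences = []
    for nxt in graph[start]:
        if nxt not in prefix:          # avoid revisiting a page (cycles)
            sequences.extend(all_sequences(graph, nxt, end, prefix))
    return sequences

sequences = all_sequences(GRAPH, "A", "D")
```

Enumerating every such sequence corresponds to generating "all possible sequences using the elements"; the resulting set is what later gets pruned during optimization.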
  • the generated path sequence may be stored in the default path list DB 205.
  • a path graph may be generated as shown in (B) of FIG. 6.
  • the path graph may be a set of sequences of which paths are continuously connected.
  • the path graph may be divided into a default path sequence and a user path sequence.
  • the default path sequence generator 203 generates all the possible paths in the application 101 and may generate sequences by continuously connecting the paths.
  • the generated path sequences are generated based on the screen information crawled in the application 101 and may be defined as a default sequence.
  • the path graph formed of the default path sequences may be expanded by adding user path sequences.
  • a user path sequence may be generated by the user path sequence generator 207, an example of which will now be described.
  • the user behavior collector 105 of the user device 100 collects execution information of the application 101, and may transmit the collected execution information to the user path sequence generator 207.
  • the user behavior collector 105 may operate in the background under the operating system of the user device 100.
  • the user behavior collector 105 may collect execution information in real time through an Android agent.
  • the user behavior collector 105 may collect a GUI action recording result through a JavaScript and the like.
  • the user path sequence generator 207 collects user behavior information (e.g., developer/user behavior collecting) from the user behavior collector 105 and may infer a user’s intention (e.g., behavior sequence factoring) by generating a user path sequence.
  • the user path sequence generator 207 may generate a user path sequence from the user behavior information collected from the user behavior collector 105.
  • the user path sequence may be “Click button1 in A page, input a text in B page, and click button3 to move to C page”.
  • the user path sequence generator 207 may extract a user path sequence by using Keras, TensorFlow, and the like, since a recurrent neural network (RNN) algorithm is appropriate for execution information that contains sequential data such as voice, strings, and the like.
  • the user path sequence generator 207 may extract a user path sequence by using a decision tree supported by scikit-learn in analysis of simple action sequence information.
  • the user path sequence generator 207 may cluster user path sequences by using a machine learning (ML) algorithm.
  • a user’s intention corresponding to the clustering may be labeled.
  • labeling may be carried out by file selection.
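The clustering-and-labeling step can be illustrated with a dependency-free stand-in: group identical recorded sequences, count their frequency, and attach an externally supplied intention label to a cluster. The recorded sequences and the "send fax" label below are illustrative assumptions; the patent's actual approach uses an ML clustering algorithm.

```python
from collections import Counter

# Illustrative recorded user path sequences, encoded as "page:element" steps.
observed = [
    ("A:button1", "B:text", "B:button3"),
    ("A:button1", "B:text", "B:button3"),
    ("A:button2", "C:button1"),
]

# Group identical sequences and count occurrences (stand-in for clustering).
clusters = Counter(observed)

# Labels come from outside the clustering step (developer input or auto-labeling).
labels = {("A:button1", "B:text", "B:button3"): "send fax"}

most_common_seq, count = clusters.most_common(1)[0]
```

The most frequently repeated sequence is the strongest candidate for a labeled user intention, matching the repeated-behavior extraction described for FIG. 15.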
  • the user path sequence generator 207 may store the user path sequence in the user path list DB 209. In addition, the user path sequence generator 207 may output the user path sequence to the test sequence generator 211.
  • the test sequence generator 211 may perform path optimization by receiving a default path sequence and a user path sequence.
  • the test sequence generator 211 may generate a path graph as shown in FIG. 6 by connecting the received default path sequence and user path sequence.
  • the test sequence generator 211 may extract sequences formed of paths that reach an end state from a start state among sequences registered in the path graph. In addition, the test sequence generator 211 may perform optimization on the extracted path sequences. Since only paths required for the test are selected through optimization and generated as a sequence, time consumed for the test can be saved.
  • a path sequence used in the test may be called a test sequence.
  • An example of such an optimization process will be described with reference to FIG. 9.
  • the test sequence generator 211 may extract a sequence formed of an essential path as shown in (C) of FIG. 9 by removing a redundant path (shaded portion) as shown in (B) of FIG. 9 among all the sequences shown in (A) of FIG. 9.
  • the test sequence generator 211 may perform optimization to select sequences that satisfy a predetermined condition.
  • the predetermined condition for optimization may be set to select sequences generated by combinations that satisfy a threshold condition, among combinations formed of the number of elements and the number of paths.
  • the number of elements used for the transition at least once may be set as a threshold condition.
  • the number of paths for a shortest path search may be set as a threshold condition.
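One optimization criterion named above is a shortest path search. A breadth-first search over the path graph yields a minimal-length sequence reaching the end state, discarding redundant longer sequences; the graph is the same illustrative assumption as before.

```python
from collections import deque

# Illustrative path graph: page -> pages reachable by clicking some element.
GRAPH = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D"],
    "D": [],
}

def shortest_sequence(graph, start, end):
    """Breadth-first search returns a minimal-length path sequence, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Keeping only such minimal sequences is one way the redundant (shaded) paths of FIG. 9 could be pruned, saving test time.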
  • the path has a log data type as shown in FIG. 10.
  • a path may be formed of an index that indicates a generated order, a current state (or page), a distinguishing UI ID, an action, and a next state (or page).
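The log-data shape just described can be sketched as a record type. The field names are assumed from the description of FIG. 10, not copied from the figure itself.

```python
from dataclasses import dataclass

@dataclass
class PathLogEntry:
    index: int          # order in which the path was generated
    current_state: str  # page before the action
    ui_id: str          # ID distinguishing the acted-on UI element
    action: str         # e.g., "click"
    next_state: str     # page after the action

# The behavior "click button2 in the A page, switch to the B page" (index 2).
entry = PathLogEntry(2, "A", "button2", "click", "B")
```

Sorting such entries by index and chaining them reproduces the scenario-sequence listing of FIG. 11.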
  • a path showing the behavior that, when the user clicks the button2 in the A page, the page is switched to the B page may correspond to index 2 in FIG. 10.
  • Such paths are sequentially connected to form a path sequence, which is shown in FIG. 11.
  • the test sequence generator 211 may list the paths shown in FIG. 10 sequentially as shown in (A).
  • the test sequence generator 211 may generate a test sequence (or scenario sequence) that corresponds to Scenario 1 as shown in (B).
  • the generated test sequence is stored in the test sequence DB 213.
  • the test sequence generator 211 may perform emulation (e.g., Web/App Emulator) of a test sequence in advance.
  • the test sequence generator 211 may record a test sequence generation log (e.g., Screen shot Test log).
  • the test sequence generator 211 generates and provides a query to the user that inquires about whether the user path sequence is used for a test (e.g., a test case selection page), and the user path sequence may be registered in the test sequence DB 213 when the user accepts.
  • a test e.g., a test case selection page
  • the test sequence DB 213 may store a developer sequence (e.g., developer/user behavior biased test sequence) and a random sequence (e.g., unbiased general test sequence).
  • the developer sequence is a summary of the test sequence repeatedly input by the developer.
  • the random sequence is a list of test sequences combined by machine learning, regardless of the number of occurrences.
  • the test sequence generator 211 may determine whether a user path sequence is registered in the test sequence DB 213. If not registered, the test sequence generator 211 may add the user path sequence. In this case, the user path sequence is generated based on user behavior information collected before distribution of the application to the public. For example, when a user path sequence is generated based on a developer behavior, the user path sequence may also be called a developer test sequence.
  • the testing engine 215 may request the test agent 107 to execute a test sequence registered in the test sequence DB 213 to perform a test of the application 101.
  • the testing engine 215 may execute emulation (e.g., Web/App Emulator) before transmission.
  • the testing engine 215 may transmit a test sequence to the test agent 107 using a test framework such as Selenium or Appium.
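Replaying a test sequence on the device side can be sketched as below. A minimal fake driver stands in for a real Selenium/Appium session; its interface (a `click` method and a current page) is an illustrative assumption, not either framework's actual API.

```python
class FakeDriver:
    """Drives a toy transition table in place of a live GUI session."""
    TRANSITIONS = {("A", "button2"): "B", ("B", "button3"): "C"}

    def __init__(self, start="A"):
        self.page = start

    def click(self, ui_id):
        # Unknown (page, element) pairs leave the page unchanged.
        self.page = self.TRANSITIONS.get((self.page, ui_id), self.page)

def run_test_sequence(driver, sequence):
    """Replay (ui_id, expected_page) steps and collect mismatches."""
    failures = []
    for ui_id, expected in sequence:
        driver.click(ui_id)
        if driver.page != expected:
            failures.append((ui_id, expected, driver.page))
    return failures

result = run_test_sequence(FakeDriver(), [("button2", "B"), ("button3", "C")])
```

The collected failure list plays the role of the error messages the testing engine gathers to determine the test result.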
  • the testing engine 215 may determine whether the test operation is performed by receiving a test result (e.g., screen shot test results) from the test agent 107 and may determine a test result by collecting error messages generated during the test process. As described, since the testing engine 215 tests based on test sequences registered in the test sequence DB 213 periodically or at predetermined timing, a user path sequence is reflected in the test through such a process.
  • FIG. 13 is a flowchart of a default path sequence generation process according to an example.
  • the default path sequence generator 203 may collect screen information crawled from a distribution target application in operation S101.
  • the default path sequence generator 203 may extract a plurality of elements from the collected screen information in operation S103.
  • the default path sequence generator 203 may label the meaning of each of the elements by using attribute information of the extracted element or a learning result through the machine learning algorithm in operation S105.
  • the default path sequence generator 203 may generate a connection structure of meaningful elements based on the labeling information and generate default path sequences from the connection structure in operation S107.
  • the connection structure may be in the form of a tree structure or state diagram of the states that are transitioned by the elements.
  • the path sequences generated in operation S107 may form the path graph described with reference to FIG. 6.
  • FIG. 14 is a flowchart of a user path sequence generation process according to an example.
  • the user path sequence generator 207 may collect user behavior information using the distribution target application in operation S201.
  • the user path sequence generator 207 may extract a meaningful user path sequence from the collected user behavior information in operation S203.
  • the user path sequence generator 207 may cluster the extracted user path sequence using machine learning and may label classification information corresponding to the cluster in operation S205.
  • the classification information may refer to an intention or a purpose of a user path sequence.
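The clustering in operation S205 can be approximated, purely for illustration, by a much simpler stand-in: grouping user path sequences by Jaccard similarity of their action tokens. The threshold and the sample sequences below are assumptions; a real system would use the machine-learning clustering described above.

```python
# Simplified stand-in for operation S205: group user path sequences whose
# action tokens overlap strongly. Threshold and data are illustrative.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_sequences(sequences, threshold=0.5):
    clusters = []
    for seq in sequences:
        for cluster in clusters:
            if jaccard(seq, cluster[0]) >= threshold:
                cluster.append(seq)  # similar enough: join this cluster
                break
        else:
            clusters.append([seq])   # start a new cluster
    return clusters

seqs = [
    ["zoom-in", "scan", "zoom-in", "scan"],
    ["zoom-in", "scan"],
    ["setting", "fax", "send"],
]
clusters = cluster_sequences(seqs)
print(len(clusters))  # 2: a "scan at higher resolution" group and a "send fax" group
```

Each resulting cluster would then be labeled with classification information (the inferred intention), e.g. "increase resolution" for the zoom-in/scan group.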
  • FIG. 15 is a flowchart of a user path sequence generation process according to an example.
  • the user path sequence generator 207 may collect behavioral information of users (e.g., developers) using the application before distribution at different time points in operation S301.
  • the user path sequence generator 207 may generate a user path sequence in which user paths repeatedly generated from collected behavior information are sequentially connected with each other in operation S303.
  • the user path sequence generator 207 may register labeling information that indicates a user’s intention that corresponds to the user path sequence generated in operation S303 in operation S305. In this case, the labeling information may be input from an external source.
  • the user path sequence generator 207 may generate a user path sequence that corresponds to a specific user’s intention as a test sequence by using the labeling information in operation S307.
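Operation S303 connects repeatedly observed user paths into a sequence. A minimal sketch, assuming each session is a list of path identifiers and that "repeatedly generated" means a consecutive path pair seen in at least two collected sessions:

```python
# Hedged sketch of operation S303: keep path bigrams that recur across
# collected sessions and chain overlapping ones into a candidate sequence.
from collections import Counter

def repeated_bigrams(sessions, min_count=2):
    counts = Counter()
    for session in sessions:
        for pair in zip(session, session[1:]):
            counts[pair] += 1
    return {pair for pair, n in counts.items() if n >= min_count}

def chain(bigrams, start):
    """Follow repeated bigrams from a start path to build one sequence."""
    sequence, current = [start], start
    nexts = {a: b for a, b in bigrams}
    while current in nexts and nexts[current] not in sequence:
        current = nexts[current]
        sequence.append(current)
    return sequence

sessions = [
    ["home", "setting", "scan", "send"],
    ["home", "setting", "scan", "preview"],
]
bigrams = repeated_bigrams(sessions)
print(chain(bigrams, "home"))  # ['home', 'setting', 'scan']
```

The divergent tails (`send` vs. `preview`) occur only once each, so only the shared prefix survives as the repeatedly generated user path sequence.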
  • FIG. 16 is a block diagram of a test automation system according to an example.
  • Referring to FIG. 16, an example configuration is described. However, in various examples, elements of FIG. 16 may be added to the configuration of FIG. 1. That is, although the elements of FIG. 1 are not illustrated in FIG. 16, the example configuration of FIG. 16 may further include the elements of FIG. 1. In this case, the same elements as those in FIG. 1 are referred to with the same reference numerals.
  • a user device 100' in a test automation system 1' is a device in which a distributed application 101 is loaded.
  • a user behavior collector 105 may transmit user behavior information collected from the application 101 to a user path sequence generator 207 of a test server 200'.
  • the user path sequence generator 207 may generate a user path sequence from the user behavior information using the same example method described with reference to FIG. 1 to FIG. 15.
  • the user path sequence may be output to a personalizer 217.
  • the personalizer 217 may propose an appropriate path sequence to a user by learning a user’s pattern based on the user path sequence.
  • the personalizer 217 may provide guide information for a user to the application 101.
  • the guide information may propose a preferred sequence that executes a specific function. In this way, malfunctions such as falling into an infinite loop due to a wrong sequence may be prevented.
  • the personalizer 217 may provide guide information in conjunction with an Android agent 109 or in conjunction with a Java script.
  • FIG. 17 is a flowchart of an expansion process of a test sequence according to an example.
  • the personalizer 217 may compare user path sequences received from the user path sequence generator 207 with verified test sequences registered in the test sequence DB 213 in operation S401 and determine whether there is a mismatch in operation S403.
  • If there is no mismatch, the process may be terminated. If a mismatch occurs, the personalizer 217 may register the user path sequences as a verification target in the test sequence DB 213 in operation S405.
  • the user path sequences registered as the verification target are provided to a user such as a developer. In addition, when the developer accepts the user path sequences as a test target, they may be used in the test.
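The mismatch check of operations S401–S405 can be sketched as a set comparison, assuming each sequence is a tuple of path strings and the verified portion of the test sequence DB is represented by a plain Python set:

```python
# Sketch of S401-S405: user path sequences not found among the verified
# test sequences are queued as verification targets for developer review.

def register_verification_targets(user_sequences, verified_db):
    targets = []
    for seq in user_sequences:
        if tuple(seq) not in verified_db:       # S403: mismatch detected
            targets.append(tuple(seq))          # S405: register as target
    return targets

verified = {("A:button2:click", "B:button1:click")}
observed = [["A:button2:click", "B:button1:click"],
            ["A:button3:click", "F:button1:click"]]
print(register_verification_targets(observed, verified))
```

Only sequences the developer then accepts as a test target would graduate from this verification queue into the actual test set, which is how test coverage expands over time.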
  • FIG. 18 is a flowchart of a process for providing guide information according to an example.
  • the user path sequence generator 207 may collect behavior information of general users from the user behavior collector 105 in operation S501.
  • the user path sequence generator 207 may extract meaningful user path sequences from the behavior information of the general users in operation S503.
  • the user path sequence generator 207 may cluster the extracted user path sequence using machine learning, and label classification corresponding to the cluster in operation S505. For example, when the user path sequence is “zoom-in -> scan -> zoom-in -> scan”, the user path sequence is labeled with the intention of the user to increase resolution.
  • the personalizer 217 may check similarity between metadata of the user path sequence and metadata of the test sequence registered in the test sequence DB 213 in operation S507.
  • the personalizer 217 can extract the most relevant test sequence as the guide path based on the relationships between elements in the paths that form the sequence in operation S509. Relevance is determined based on metadata, such as an attribute of an element. In this case, the extracted guide path sequence may be “Setting > Scan > Option > Resolution”.
  • the personalizer 217 may provide a guide message including a message such as “High resolution image scanning is available by increasing resolution” together with the extracted guide path sequence in operation S511.
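Operations S507–S509 can be illustrated with a simple metadata-overlap score. The metadata keys, the scoring rule, and the sample DB entries below are assumptions; the document only specifies that relevance is determined from metadata such as element attributes.

```python
# Illustrative sketch of S507-S509: pick the registered test sequence whose
# metadata best overlaps the user path sequence's metadata.

def best_guide_path(user_metadata, test_db):
    def score(entry):
        return len(set(user_metadata) & set(entry["metadata"]))
    return max(test_db, key=score)

test_db = [
    {"sequence": "Setting > Scan > Option > Resolution",
     "metadata": {"scan", "resolution", "setting"}},
    {"sequence": "Setting > Fax > Send",
     "metadata": {"fax", "send"}},
]
guide = best_guide_path({"scan", "resolution"}, test_db)
print(guide["sequence"])  # Setting > Scan > Option > Resolution
```

The extracted guide path would then be shown with a message such as the one in operation S511 ("High resolution image scanning is available by increasing resolution").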
  • FIG. 19 illustrates a personalized menu providing method according to an example.
  • FIG. 20 illustrates a personalized menu providing method according to an example.
  • a personalizer 217 may learn a unique pattern of a user to provide an option for selecting a menu structure suitable for the user. That is, the personalizer 217 may provide a recommended menu according to an action sequence analysis of a user.
  • In FIG. 19, (A) shows a default menu and (B) shows a menu displayed at the highest level.
  • the menu repeatedly used according to the user's action sequence analysis may be displayed as the highest level.
  • the personalizer 217 may arrange menus in order of frequency of use (e.g., hits) according to analysis of the user’s action sequence.
  • the personalizer 217 may provide a menu tree that is frequently used as an option according to the frequency of use.
  • a user manual can be provided according to a user's intention.
  • a user guide may be generated and distributed in advance and may not reflect a changed state until an editor directly modifies it.
  • In contrast, a menu configuration suited to the user characteristics inferred from the user behavior information, and a user guide that matches it, can be accurately generated and distributed in real time.
  • menu initialization may be carried out for returning to the initial menu.
  • FIG. 21 illustrates a method for providing a personalized workflow according to an example.
  • a plurality of blocks A, B, C, D, E, F, G, H, and I are disposed on a screen as shown at (A).
  • Each of the blocks may include at least one of an application, a software component, a function, and the like.
  • the personalizer 217 may extract a workflow of a user by learning a pattern of the user based on a user path sequence provided from a user path sequence generator 207.
  • the workflow may imply sequential arrangement of a plurality of blocks used by the user as shown at (B).
  • the personalizer 217 may sequentially arrange combinations of used blocks as shown at (B) in the order of highest frequency of occurrence.
  • Workflow #1 (WP #1) has the highest combination frequency (e.g., 102 occurrences).
  • Workflow #2 (WP #2) has the next highest frequency (e.g., 72 occurrences), and workflow #3 (WP #3) has a lower frequency (e.g., 56 occurrences).
  • the personalizer 217 may arrange a sequence as workflow #1 (WP #1) → workflow #2 (WP #2) → workflow #3 (WP #3).
  • the personalizer 217 may request a selection from the user by presenting the arranged workflows together with guide information, and a selected workflow may be distributed.
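The frequency-based ordering of workflows described above can be sketched with a counter. The sample block combinations below are assumptions loosely modeled on WP #1–#3:

```python
# Sketch of workflow ranking: count how often each block combination occurs
# in the learned user path sequences and present them most-frequent first.
from collections import Counter

def rank_workflows(observed_combinations):
    counts = Counter(tuple(c) for c in observed_combinations)
    return [list(combo) for combo, _ in counts.most_common()]

# Illustrative observations: blocks A-I as in FIG. 21, frequencies invented.
observed = [["A", "D", "G"]] * 3 + [["B", "E"]] * 2 + [["C", "F", "I"]]
print(rank_workflows(observed))
# [['A', 'D', 'G'], ['B', 'E'], ['C', 'F', 'I']]
```

`Counter.most_common()` returns entries in descending count order, which directly gives the WP #1 → WP #2 → WP #3 arrangement.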
  • FIG. 22 illustrates a process for generating a tutorial content according to an example.
  • a personalizer 217 may generate tutorial content or a guideline as shown at (B) based on a test sequence collected from a test sequence DB 213 as shown at (A).
  • the personalizer 217 may execute an application 101 based on a test sequence registered in the test sequence DB 213 and may capture an executed screen.
  • motion process captions may be attached to the captured screen.
  • a label assigned to an element or a GUI may be used.
  • the personalizer 217 can generate pages with words and captured screens in a sequence and generate tutorial content with the pages arranged according to a menu structure. Therefore, even if the developer does not produce a separate guideline, a guide service can be provided based on the test scenario.
  • FIG. 23 illustrates a method for providing guide information to a user of an application according to an example.
  • a test server 200 has a different configuration and operation from the configuration and operation respectively shown in FIG. 1 and FIG. 2 according to an example. In FIG. 23, only necessary elements for description of the example are included.
  • a camera may photograph a screen and transmit the captured image to the UI learning module 201.
  • the UI learning module 201 may perform UI learning using a method as described in the various examples of FIG. 1 to FIG. 15.
  • the user path sequence generator 207 may generate a user path sequence according to the UI manipulation of the user using a method as described in the various examples of FIG. 1 to FIG. 15 based on a UI learning result of the UI learning module 201.
  • the test case DB 213 may store a test case, which is the path sequence generated from a machine learning result of an operation method and process of a machine using a GUI.
  • the personalizer 217 may compare a user path sequence according to UI manipulation of the user with a pre-stored test case to monitor whether the user path sequence matches the test case.
  • a warning alarm may sound and a record of the exceptions may be stored.
  • the warning alarm may be a screen popup, a warning sound, or the like, and may include a warning message (e.g., “The safety apparatus operates in a manual mode set by a user. This is dangerous because it deviates from the normal operation method.”).
  • the developer can view the history of exceptions and retrain risk/safety on a case-by-case basis.
  • Log records of user path sequences can be retrieved.
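A minimal sketch of this monitoring step: a user path sequence that matches no stored test case triggers a warning and is recorded for later developer review. The data shapes and message text are assumptions:

```python
# Hedged sketch of the monitoring step: flag a user path sequence that does
# not match any stored test case, keep an exception record, and warn.

def monitor(user_sequence, test_cases, exception_log):
    if tuple(user_sequence) in test_cases:
        return None                              # normal, verified behavior
    exception_log.append(tuple(user_sequence))   # record for the developer
    return ("Warning: the operation deviates from the verified "
            "operation methods and may be unsafe.")

test_cases = {("power-on", "auto-mode", "start")}
log = []
alarm = monitor(["power-on", "manual-mode", "start"], test_cases, log)
print(alarm is not None, len(log))  # True 1
```

The accumulated `exception_log` corresponds to the retrievable log records of user path sequences, which the developer can review to retrain risk/safety on a case-by-case basis.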
  • FIG. 24 is a schematic diagram of a computing device according to an example.
  • a user device 100, 100’ and a test server 200, 200’, 200” as described with reference to FIG. 1 to FIG. 23 may each be implemented as a computing device 300 that operates by at least one processor and executes a program including instructions for execution of the example operations described above.
  • hardware of the computing device 300 may include at least one processor 301, a memory 303, a storage unit 305, and a communication interface 307, which may be connected through a bus.
  • hardware such as an input device and an output device may be included.
  • the computing device 300 can be loaded with a variety of software, including an operating system that can run programs.
  • the processor 301 may be a processor of various types for processing instructions included in a program, and may be, for example, a central processing unit (CPU), a microprocessor unit (MPU), a microcontroller unit (MCU), a graphics processing unit (GPU), and the like.
  • the memory 303 may load a corresponding program such that instructions to execute an example operation as described above are processed by the processor 301.
  • the memory 303 may be, for example, a read only memory (ROM), a random access memory (RAM), and the like.
  • the storage unit 305 may store various data, programs, etc. required to execute an operation of the present invention.
  • the communication interface 307 may be a wired/wireless communication module.
  • test coverage can be continuously extended by comparing the developer's user scenarios and intentions.
  • test validation in a personalized domain, which was previously difficult, becomes easier even after the product is released.
  • the user's exceptional behavior can be detected, and the user can be guided with correct behaviors.
  • user-based personalized menus, personalized workflow design, and testing are possible.

Abstract

A test automation system of an application is disclosed. An example system generates a first path that indicates information on a state transition of an application switched by a first user's behavior based on behavior information of the first user using the application, generates a second path that indicates a transition between random application states based on screen information crawled in the application, and generates at least one test scenario in which a plurality of paths are sequentially connected, by using the first path and the second path.

Description

TEST AUTOMATION OF APPLICATION
BACKGROUND
[0001] In general, when an application is developed, a test is needed to determine whether there is a problem with the developed application. However, testing the various behaviors of an application by a user not only requires time to execute the test, but can also be expensive.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Various examples will be described below by referring to the following figures.
[0003] FIG. 1 is a block diagram of a test automation system according to an example.
[0004] FIG. 2 shows a configuration that generates a default path (Full-path) sequence according to an example.
[0005] FIG. 3 illustrates element extraction from screen information collected in the form of extensible markup language (XML) according to an example.
[0006] FIG. 4 illustrates path generation according to an example.
[0007] FIG. 5 illustrates path sequence generation according to an example.
[0008] FIG. 6 illustrates a path graph according to an example.
[0009] FIG. 7 shows a configuration that generates a user path (Human-generated path) sequence according to an example.
[0010] FIG. 8 illustrates generation of a test sequence according to an example.
[0011] FIG. 9 illustrates path sequence optimization according to an example.
[0012] FIG. 10 illustrates log data, which is a base of path generation, according to an example.
[0013] FIG. 11 illustrates scenario sequence generation of an application according to an example.
[0014] FIG. 12 illustrates a configuration for execution of a test using a test sequence according to an example.
[0015] FIG. 13 is a flowchart of a default path sequence generation process according to an example.
[0016] FIG. 14 is a flowchart of a user path sequence generation process according to an example.
[0017] FIG. 15 is a flowchart of a user path sequence generation process according to an example.
[0018] FIG. 16 is a block diagram of a test automation system according to an example.
[0019] FIG. 17 is a flowchart of an expansion process of a test sequence according to an example.
[0020] FIG. 18 is a flowchart of a process for providing guide information according to an example.
[0021] FIG. 19 illustrates a personalized menu providing method according to an example.
[0022] FIG. 20 illustrates a personalized menu providing method according to an example.
[0023] FIG. 21 illustrates a method for providing a personalized workflow according to an example.
[0024] FIG. 22 illustrates a process for generating a tutorial content according to an example.
[0025] FIG. 23 illustrates a method for providing guide information to a user of an application according to an example.
[0026] FIG. 24 is a schematic diagram of a computing device according to an example.
DETAILED DESCRIPTION
[0027] In the following description, examples are shown and described. As those skilled in the art would realize, the examples may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive.
[0028] Unless explicitly described to the contrary, the word “comprise” and variations such as “comprises” or “comprising” will be understood to imply the inclusion of stated elements but not the exclusion of any other elements.
[0029] In addition, terms such as “-er”, “-or”, and “module” described in the specification refer to units for processing at least one function or operation, and can be implemented by hardware components or software components and combinations thereof.
[0030] A server and a device may be composed of hardware including at least one processor, a memory, a communication apparatus, etc., and a program executed in combination with the hardware is stored in a designated location. The hardware may have a configuration and performance to implement an example method as described below. The program may include instructions that implement an example method of operation as described with reference to the drawings and is executed in combination with hardware such as a processor and a memory.
[0031] In the following description, “transmission or provision” may include not only direct transmission or provision, but also indirect transmission or provision through other paths or through other devices. In the following description, an expression recited in the singular may be construed as singular or plural unless an expression such as “one” or “single” is used.
[0032] In the following description, the same drawing numbers may refer to the same constituent elements and a redundant description may be avoided for brevity. The term “and/or” includes all combinations of each and at least one of the constituent elements mentioned.
[0033] Test automation technology for an application can generate a test scenario by determining and learning a test path without any developer intervention. However, in the case of generating all possible test scenarios mechanically from crawled screen information, the test scenario becomes too large and the test automation time becomes long.
[0034] In addition, test scenarios created without developer intervention may not grasp the intention of the developer, and thus, sophisticated verification cannot be performed. In addition, because such an approach covers all possible scenarios, no specialized verification can be performed for users who use intuitive, function-driven scenarios. Accordingly, no exceptions can be verified.
[0035] A manual test performed by a developer has small coverage of the test scenario, and thus the verification level of the application is low. In addition, the manual test performed by the developer may cause a problem in that an unverified path may be used for the test due to an unintended method of a user.
[0036] According to an example, a test automation system that generates a test scenario to which an inferred user’s intention is reflected by using user behavior information, and uses the generated test scenario in a test of an application, and a method thereof, can be provided.
[0037] A described example may be one of typical test activities. Such a typical test activity may include a functional test, a performance test, a user interface (UI) & usability test, a compatibility test, an application programming interface (API) integration test, a data test, and the like. Among these, the UI & usability test may include an operational acceptance test, a design acceptance test, and a data test. The operational acceptance test may include a UI test. The design test may include a visual test and a usability test. The data test may test the connection uniform resource link (URL) of data such as “Image/text content test”.
[0038] A test target is an application that provides a graphical user interface (GUI) and may include a web page displayed in a dedicated application or a browser.
[0039] The GUI may be formed of a plurality of pages. Each page may include a plurality of elements. Each element may include a visual unit constituting a page and may be set to interact with a user. The element may include text information and/or visual information. For example, the visual information may include a button, a check-box, and the like.
[0040] When a user action (for example, a click) specified in the element is input, a current page may be switched to a next page, or a current state may be switched to a next state. For example, when button1 is clicked in an A page, the current page may be switched to a B page.
[0041] The application may be tested by at least one test scenario. The test scenario may be a sequence with some purpose. For example, when an application installed in a printer is tested, a test scenario for sending a fax may be a sequence of connected elements that must pass sequentially from the start state to the fax send state. In the following description, a path sequence may be formed of sequentially connected paths. In the following description, one path may be defined as an activity derived by using an action as an input in a current state. For example, the path may be described as “A page : ID=button2 : click → B page”. Here, “A page” indicates a current state, “ID=button2” indicates an element, “click” indicates an action, and “→ B page” indicates an operation. That is, a path is defined as “when button2 is clicked in the current state in which “A page” is displayed, the page is switched to “B page”. A path sequence for reaching a D page from the A page may be formed of, for example, three paths as shown in Table 1.
Table 1
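The path notation of paragraph [0041] can be held in a small structured record. The parser below assumes the `state : element : action -> next state` layout shown above (written here with a plain `->` arrow) and is purely illustrative:

```python
# Illustrative parser for the path notation used above, e.g.
# "A page : ID=button2 : click -> B page" (state : element : action -> next state).
from typing import NamedTuple

class Path(NamedTuple):
    state: str
    element: str
    action: str
    next_state: str

def parse_path(text):
    head, next_state = text.split("->")
    state, element, action = (part.strip() for part in head.split(":"))
    return Path(state, element, action, next_state.strip())

p = parse_path("A page : ID=button2 : click -> B page")
print(p.next_state)  # B page
```

A path sequence such as the one in Table 1 is then simply a list of these `Path` records whose `next_state` of one entry is the `state` of the following entry.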
[0042] Hereinafter, test automation of an application according to an example will be described with reference to the accompanying drawings.
[0043] FIG. 1 is a block diagram of a test automation system according to an example, FIG. 2 shows a configuration that generates a default path (Full-path) sequence according to an example, FIG. 3 illustrates element extraction from screen information collected in the form of extensible markup language (XML) according to an example, FIG. 4 illustrates path generation according to an example, FIG. 5 illustrates path sequence generation according to an example, FIG. 6 illustrates a path graph according to an example, FIG. 7 shows a configuration that generates a user path (Human-generated path) sequence according to an example, FIG. 8 illustrates generation of a test sequence according to an example, FIG. 9 illustrates path sequence optimization according to an example, FIG. 10 illustrates log data, which is a base of path generation, according to an example, FIG. 11 illustrates scenario sequence generation of an application according to an example, and FIG. 12 illustrates a configuration for execution of a test using a test sequence according to an example.
[0044] Referring to FIG. 1, a test automation system 1 may include a user device 100 and a test server 200. The user device 100 and the test server 200 may each be a computing device including a memory and at least one processor. The processor is to execute instructions of a program loaded into the memory. The user device 100 may be provided in plural, but it will be described as a single user device in the following description for the sake of convenience.
[0045] The user device 100 refers to a device that loads an application, which is a test target, and tests the application by using a test scenario (i.e., a test case) provided by the test server 200. The user device 100 may be a mobile terminal, a laptop, a tablet device, and the like. In an example, it is assumed that the application is tested by a developer to determine a normal operation before public distribution. However, in other examples, the application may be publicly distributed to general users and may be a test target as necessary even after the distribution.
[0046] The test server 200 generates a test case based on information collected from the user device 100 and may continuously update the data by expanding or modifying the information. In addition, the test scenario generated by the test server 200 may be used in testing of the application.
[0047] The user device 100 may include an application 101 , a screen crawling engine 103, a user behavior collector 105, and a test agent 107.
[0048] The test server 200 may include a UI learning module 201, a default path sequence generator 203, a default path list database (DB) 205, a user path sequence generator 207, a user path list DB 209, a test sequence generator 211, a test sequence DB 213, and a testing engine 215.
[0049] The application 101 is a test target. The application 101 may be an application residing in the user device 100 or a web application that is downloaded and executed while connected to the Internet.
[0050] The screen crawling engine 103 may be able to collect screen information (e.g., GUI images, screenshots, etc.) as page units while the application 101 is being executed.
[0051] According to an example, when the application 101 is an Android application, the screen crawling engine 103 may collect screen information using “UI Automator”, a test framework provided by the mobile OS.
[0052] According to an example, when the application 101 is a web application, the screen crawling engine 103 may collect screen information by using “WebDriver” or “Selenium”.
[0053] The screen crawling engine 103 may transmit collected screen information to the UI learning module 201.
[0054] In this case, the application 101 may execute (e.g., click) all elements on the GUI image displayed on the screen according to a random command. In addition, the screen crawling engine 103 may collect a GUI image and page information according to execution of the element and transmit the collected image and information to the UI learning module 201.
[0055] In an example, the screen crawling engine 103 may click any element arbitrarily. In another example, the screen crawling engine 103 may select and click an element set in advance by the user.
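The crawl-and-click behavior described in paragraphs [0054] and [0055] can be sketched as a breadth-first traversal. The `app` mapping below is a stub standing in for a live GUI driven through UI Automator, WebDriver, or Selenium:

```python
# Minimal crawling sketch: breadth-first clicking of every element on every
# discovered page. `app` is a stub for the live GUI (page -> {element: next page}).
from collections import deque

def crawl(app, start):
    """Return collected (page, element, next_page) screen information."""
    collected, seen, queue = [], {start}, deque([start])
    while queue:
        page = queue.popleft()
        for element, next_page in app[page].items():  # click every element
            collected.append((page, element, next_page))
            if next_page not in seen:
                seen.add(next_page)
                queue.append(next_page)
    return collected

# Stub modeled on FIG. 4: button1 stays on A, button2 goes to B, button3 to F.
app = {"A": {"button1": "A", "button2": "B", "button3": "F"},
       "B": {"button1": "C"}, "F": {}, "C": {}}
info = crawl(app, "A")
print(len(info))  # 4 observed transitions
```

In the real system each observed transition would be accompanied by the captured GUI image and page source for the UI learning module, rather than just the page names.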
[0056] Referring to FIG. 2, the UI learning module 201 may receive screen information (e.g., screenshots), guide information (e.g., human-guided input), or user behavior inference information (e.g., developer behavior input & intention labels).
[0057] The UI learning module 201 may collect screen information (e.g., GUI images, page information, etc.) from the screen crawling engine 103. The UI learning module 201 may collect screen information in real time when the application 101 is executed. For example, the screen information may be collected by using the AppView framework of the user device 100 where the application 101 is executed.
[0058] The UI learning module 201 may receive guide information from an external source. When a specific input rather than a general input is required, arbitrary random tests incur a long time delay. The human-guided input prepared for such a case may be a test input value generated when the developer pre-tests, or a manually set value.
[0059] The UI learning module 201 may receive the developer behavior input & intention labels from an external source. This may be information labeled with respect to a specific behavior by a user.
[0060] The UI learning module 201 may set machine learning data based on the input information. The UI learning module 201 may perform a random test that randomly requires the screen crawling engine 103 to execute a screen of a crawling target.
[0061] The screen information collected by the UI learning module 201 may include source codes (e.g., XML). For example, the screen information collected from a web page by the UI learning module 201 may be as shown in Table 2.
Table 2
[0062] The UI learning module 201 may extract (e.g., visually recognize) elements from the screen information. The UI learning module 201 may learn (e.g., UI learning) the extracted elements. In this case, the elements used when the UI learning module 201 is learning may be as shown in Table 3.
Table 3
[0063] An example process for the UI learning module 201 to extract elements is shown in FIG. 3.
[0064] Referring to FIG. 3, a login page P10 crawled by the screen crawling engine 103 may include an E-mail address input field P11, a password input field P13, and an authorize button P15. In this case, source codes that represent the respective fields P11, P13, and P15 are displayed.
[0065] The UI learning module 201 may extract elements from the source codes. For example, the UI learning module 201 may extract elements such as “E-mail Address”, “clickable”, “enabled”, “focusable”, and the like from the source codes that represent the E-mail address input field P11. The UI learning module 201 may extract elements such as 'resource-id', 'password', 'android.widget.EditText', 'clickable', and the like from the source codes that represent the Password input field P13. The UI learning module 201 may extract elements such as 'Authorize', 'resource-id', 'android.widget.Button', 'clickable', 'focusable', and the like from the source codes that represent the Authorize button P15.
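Element extraction from the crawled XML can be sketched with the standard library's `ElementTree`. The XML snippet below is an illustrative reconstruction of the password field of FIG. 3, not the exact source from the document:

```python
# Hedged sketch of element extraction from crawled XML screen information,
# modeled on the login-page fields of FIG. 3; the snippet is illustrative.
import xml.etree.ElementTree as ET

xml = """
<node resource-id="password" class="android.widget.EditText"
      clickable="true" enabled="true" focusable="true"/>
"""

def extract_element(source):
    node = ET.fromstring(source)
    # Keep only the attributes the UI learning step cares about.
    keep = ("resource-id", "class", "clickable", "enabled", "focusable")
    return {attr: node.get(attr) for attr in keep if node.get(attr) is not None}

element = extract_element(xml)
print(element["resource-id"], element["clickable"])  # password true
```

The extracted attribute dictionary is the kind of per-element record that the learning step can then label with an inferred meaning (here, a password input on a login page).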
[0066] The UI learning module 201 may extract types and formats of elements from GUI image source codes that represent the GUI images. In addition, the UI learning module 201 may extract an element bitmap image based on a coordinate value of an element known in advance. Keras and TensorFlow, which are deep learning libraries that support a convolutional neural network (CNN) method, may be used to determine the meaning of an element image. A random forest algorithm of the Natural Language Toolkit (NLTK) and scikit-learn may be utilized for classification of text information extracted from the element image.
[0067] The UI learning module 201 may use the extracted elements as training data of a machine learning model. The UI learning module 201 may infer (i.e., auto-labeling) the intention of screen information by learning the extracted elements. For example, if the extracted elements are 'login', 'password', 'authorize', etc., the meaning and purpose of these elements can be derived. Based on the meaning and purpose of the elements, the intent of the screen on which these elements are displayed can be inferred as a login page.
[0068] The UI learning module 201 may extract a plurality of elements constituting a plurality of pieces of screen information from the plurality of screenshots constituting the application. The extracted elements can be learned.
[0069] The UI learning module 201 may infer the meaning of an extracted element by determining a text attribute and an object of the extracted element and may assign a label based on the inference result. For example, a shopping cart-shaped bitmap may be labeled as a shopping cart. In this case, machine learning may be used to infer the meaning of an element whose meaning cannot be inferred from its attribute and purpose.
[0070] As such, UI learning is a process of inferring and labeling the meaning of elements included in the collected screenshots. The UI learning module 201 may output the learning result and inferred intention of the elements, that is, the UI learning result, to the default path sequence generator 203.
[0071] The default path sequence generator 203 may generate a path sequence with elements that are sequentially connected using the UI learning result received from the UI learning module 201. In this case, since the path sequence is generated for all pages constituting the application, it may be referred to as a default path sequence. The default path sequence is created automatically by a program without user action, so it is used as the default.
[0072] The default path sequence generator 203 may search a menu structure using elements and generate a sequence based on the menu structure. Here, when an element in the GUI image is clicked, paths for moving to another GUI page are continuously connected by the path sequence.
[0073] Referring to FIG. 4, an example is illustrated in which there are three buttons in the A page, which is a GUI page of the current state. When a user clicks button1 in the A page, the A page is maintained. That is, no transition to another page occurs.
[0074] When the user clicks button2 in the A page, the page is switched to the B page.
[0075] When the user clicks button3 in the A page, the page is switched to the F page.
[0076] In the drawing, pages are illustrated, but the page may be expressed as a state.
[0077] When a page or a state is referred to as a node, a transition in the same node or a transition between two nodes may be defined as a state transition, and the state transition represents a path.
[0078] In this case, a path that moves to another page when an element in the page is clicked may be referred to as a traverse path. However, a path that moves to another page when an element in the page is clicked is simply referred to as a path hereinafter.
[0079] When such an individual path is sequentially connected, a path sequence is formed.
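The transitions of FIG. 4 can be sketched as a simple lookup table keyed by (page, element). The variable and function names here are illustrative only, not the patent's representation:

```python
# FIG. 4's state transitions as a (page, element) -> next page table.
transitions = {
    ("A", "button1"): "A",  # self-transition: the A page is maintained
    ("A", "button2"): "B",
    ("A", "button3"): "F",
}

def next_page(page, element):
    """Follow one path: the state transition triggered by clicking an element."""
    return transitions[(page, element)]

print(next_page("A", "button2"))  # B
```

Each table entry corresponds to one path in the sense defined above; chaining lookups yields a path sequence.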
[0080] An example path sequence will be described with reference to FIG. 5.
[0081] Referring to FIG. 5, there are three buttons in the A page, which is a GUI page of a current state. When the user clicks button2 in the A page, the page is switched to the B page (①). When the user clicks button1 in the B page, the page is switched to the C page (②). When the user edits the text AB_ in the C page, the C page is maintained (③). When the user clicks button1 in the C page, the page is switched to the D page (④). A message (e.g., Congratulations) is displayed in the D page and three buttons are provided. In this case, when each button is clicked in the D page, as previously described with reference to FIG. 4, the page is switched to the next page corresponding to the button or the current page is maintained.
[0082] Each operation is called a path, and a path sequence is formed by sequentially connecting the respective paths. That is, when the A page is considered an initial page and the D page is considered a destination page, a path sequence for execution of a random menu function is ① → ② → ③ → ④.
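The A-to-D path sequence of FIG. 5 can be written as an ordered list of steps. The tuples below are an assumed (current page, action, next page) representation, not the patent's actual data format:

```python
# FIG. 5's four operations as an ordered list of paths.
path_sequence = [
    ("A", "click button2", "B"),
    ("B", "click button1", "C"),
    ("C", "edit text AB_", "C"),   # self-transition: the C page is maintained
    ("C", "click button1", "D"),
]

# The sequence connects the initial page A to the destination page D.
print(path_sequence[0][0], "->", path_sequence[-1][2])  # A -> D
```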
[0083] The default path sequence generator 203 can generate all possible sequences using the elements. The default path sequence generator 203 may generate a path sequence using a tree search algorithm, a model-based testing method, and the like.
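One possible realization of such a tree search is a breadth-first enumeration of every path from a start page to the terminal pages. This is a hedged sketch with an invented menu structure, not the generator's actual algorithm (which the text says may also use model-based testing methods):

```python
from collections import deque

# Illustrative menu structure: page -> {element: next page}.
menu_tree = {
    "A": {"button2": "B", "button3": "F"},
    "B": {"button1": "C"},
    "C": {"button1": "D"},
}

def all_sequences(start):
    """Enumerate all path sequences from `start` to pages with no elements."""
    sequences = []
    queue = deque([(start, [])])
    while queue:
        page, path = queue.popleft()
        elements = menu_tree.get(page)
        if not elements:            # terminal page: record the finished sequence
            sequences.append(path)
            continue
        for element, nxt in elements.items():
            queue.append((nxt, path + [(page, element, nxt)]))
    return sequences

for seq in all_sequences("A"):
    print(seq)
```

Note that a real implementation would need cycle handling, since self-transitions like FIG. 4's button1 would otherwise loop forever; the sample tree here is acyclic.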
[0084] As previously described with reference to FIG. 4, the default path sequence generator 203 may generate a path sequence. Referring to FIG. 6, the default path sequence generator 203 may generate a path sequence as shown in (A) by connecting state transitions within a page and state transitions between different states or pages. In addition, the generated path sequence may be stored in the default path list DB 205.
[0085] When such a path sequence is connected, a path graph may be generated as shown in (B) of FIG. 6. The path graph may be a set of sequences of which paths are continuously connected.
[0086] The path graph may be divided into a default path sequence and a user path sequence. In such an example method, the default path sequence generator 203 generates all the possible paths in the application 101 and may generate sequences by continuously connecting the paths. In this case, the generated path sequences are generated based on the screen information crawled in the application 101 and may be defined as a default sequence.
[0087] As described, the path graph formed of the default path sequences may be expanded by adding user path sequences. Thus, not only automatically generated default path sequences but also user path sequences generated by intuitive UI manipulation are added, thereby improving test validity. [0088] The user path sequence may be generated by the user path sequence generator 207, an example of which will now be described.
[0089] The user behavior collector 105 of the user device 100 collects execution information of the application 101 and may transmit the collected execution information to the user path sequence generator 207. The user behavior collector 105 may operate in the background under the operating system of the user device 100.
[0090] According to an example, when the application 101 is an Android application, the user behavior collector 105 may collect execution information in real time through an Android agent. [0091] According to an example, when the application 101 is a web application, the user behavior collector 105 may collect a GUI action recording result through JavaScript and the like.
[0092] Referring to FIG. 7, the user path sequence generator 207 collects user behavior information (e.g., developer/user behavior collecting) from the user behavior collector 105 and may infer a user’s intention (e.g., behavior sequence factoring) by generating a user path sequence.
[0093] The user path sequence generator 207 may generate a user path sequence from the user behavior information collected from the user behavior collector 105. The user path sequence may be “Click button1 in A page, input a text in B page, and click button3 to move to C page”.
[0094] According to an example, the user path sequence generator 207 may extract a user path sequence by using Keras, TensorFlow, and the like, since a recurrent neural network (RNN) algorithm is appropriate for execution information that contains sequential information such as voice, a string, and the like.
[0095] According to an example, the user path sequence generator 207 may extract a user path sequence by using a decision tree supported by scikit-learn in analysis of simple action sequence information.
[0096] The user path sequence generator 207 may cluster user path sequences by using a machine learning (ML) algorithm. In addition, a user’s intention corresponding to the clustering may be labeled. For example, in case of a user path sequence according to a behavior such as upload, download, storing, and the like, labeling may be carried out by file selection.
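As a toy stand-in for the ML clustering described here (the real module would use a learned model rather than a keyword rule), sequences can be grouped by a shared action and given an intention label such as "file selection":

```python
# Hypothetical user path sequences, each a list of actions.
user_sequences = [
    ["open file dialog", "select file", "upload"],
    ["open file dialog", "select file", "download"],
    ["open settings", "toggle dark mode"],
]

def label_by_file_selection(sequences):
    """Group sequences into labeled clusters by a shared action keyword."""
    clusters = {"file selection": [], "other": []}
    for seq in sequences:
        key = "file selection" if "select file" in seq else "other"
        clusters[key].append(seq)
    return clusters

clusters = label_by_file_selection(user_sequences)
print(len(clusters["file selection"]))  # 2
```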
[0097] The user path sequence generator 207 may store the user path sequence in the user path list DB 209. In addition, the user path sequence generator 207 may output the user path sequence to the test sequence generator 211.
[0098] Referring to FIG. 8, the test sequence generator 211 may perform path optimization by receiving a default path sequence and a user path sequence. [0099] The test sequence generator 211 may generate a path graph as shown in FIG. 6 by connecting the received default path sequence and user path sequence.
[00100] The test sequence generator 211 may extract sequences formed of paths that reach an end state from a start state among sequences registered in the path graph. In addition, the test sequence generator 211 may perform optimization on the extracted path sequences. Since only paths required for the test are selected through optimization and generated as a sequence, time consumed for the test can be saved.
[00101] As described, a path sequence used in the test may be called a test sequence. An example of such an optimization process will be described with reference to FIG. 9.
[00102] Referring to FIG. 9, circles indicate pages (or states). The test sequence generator 211 may extract a sequence formed of an essential path as shown in (C) of FIG. 9 by removing a redundant path (shaded portion) as shown in (B) of FIG. 9 among all the sequences shown in (A) of FIG. 9.
[00103] The test sequence generator 211 may perform optimization to select sequences that satisfy a predetermined condition. Here, the predetermined condition for optimization is set to select a sequence generated by a combination that satisfies a threshold condition among combinations formed of the number of elements and the number of paths. In an example, the number of elements used for a transition at least once may be set as a threshold condition. In another example, the number of paths for a shortest path search may be set as a threshold condition. Here, a path has a log data type as shown in FIG. 10.
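A sketch of one such optimization pass, keeping only a shortest path from the start state to the end state via breadth-first search; the page graph is invented for illustration, and the element-coverage threshold condition mentioned above could be substituted for the shortest-path criterion:

```python
from collections import deque

# Illustrative page graph: state -> list of reachable states.
edges = {
    "start": ["A", "B"],
    "A": ["C"],
    "B": ["C", "end"],
    "C": ["end"],
}

def shortest_sequence(start, goal):
    """Breadth-first search: return a fewest-transitions path from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_sequence("start", "end"))  # ['start', 'B', 'end']
```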
[00104] Referring to FIG. 10, a path may be formed of an index that indicates a generated order, a current state (or page), a UI-distinguishing ID, an action, and a next state (or page).
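The log record described above might be modeled as a small data structure; the field names below are an assumption based on this paragraph, not the patent's schema:

```python
from dataclasses import dataclass

@dataclass
class PathRecord:
    """One path log entry: generation index, current state, UI ID, action, next state."""
    index: int
    current_state: str
    ui_id: str
    action: str
    next_state: str

# Index 2 of FIG. 10: clicking button2 on the A page switches to the B page.
p = PathRecord(index=2, current_state="A", ui_id="button2",
               action="click", next_state="B")
print(p.next_state)  # B
```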
[00105] For example, a path representing the behavior in which, when the user clicks button2 in the A page, the page is switched to the B page may correspond to index 2 in FIG. 10.
[00106] Such paths are sequentially connected to form a path sequence, which is shown in FIG. 11.
[00107] Referring to FIG. 11, the test sequence generator 211 may list the paths shown in FIG. 10 sequentially as shown in (A). The test sequence generator 211 may generate a test sequence (or scenario sequence) that corresponds to Scenario 1 as shown in (B). The generated test sequence is stored in the test sequence DB 213.
[00108] It is not easy for machine learning to grasp the exact purpose of an application. Therefore, when a message is displayed according to the requirements of the developer or user, the success or failure of the test may be set in advance. Referring again to FIG. 8, the test sequence generator 211 may perform emulation (e.g., Web/App Emulator) of a test sequence in advance.
[00109] As shown in FIG. 8, the test sequence generator 211 may record a test sequence generation log (e.g., Screen shot Test log).
[00110] As shown in FIG. 8, the test sequence generator 211 generates and provides a query to the user that inquires whether the user path sequence is to be used for a test (e.g., a test case selection page), and the user path sequence may be registered in the test sequence DB 213 when the user accepts.
[00111] The test sequence DB 213 may store a developer sequence (e.g., developer/user behavior biased test sequence) and a random sequence (e.g., unbiased general test sequence). The developer sequence is a summary of the test sequence repeatedly input by the developer. The random sequence is a list of test sequences combined by machine learning, regardless of the number of occurrences.
[00112] The test sequence generator 211 may determine whether a user path sequence is registered in the test sequence DB 213. If not registered, the test sequence generator 211 may add the user path sequence. In this case, the user path sequence is generated based on user behavior information collected before distribution of the application to the public. For example, when a user path sequence is generated based on a developer behavior, the user path sequence may also be called a developer test sequence.
[00113] Referring to FIG. 12, the testing engine 215 may request the test agent 107 to execute a test sequence registered in the test sequence DB 213 to perform a test of the application 101. In this case, the testing engine 215 may execute emulation (e.g., Web/App Emulator) before transmission. [00114] The testing engine 215 may transmit a test sequence to the test agent 107 using a test framework such as Selenium or Appium.
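A simplified, dependency-free stand-in for replaying a test sequence step by step and reporting pass/fail per step. The real engine would drive Selenium or Appium through the test agent; the function and data below are hypothetical:

```python
def run_test_sequence(sequence, transitions):
    """Replay (current, element, expected_next) steps against a transition table."""
    results = []
    for current, element, expected in sequence:
        actual = transitions.get((current, element))
        results.append(actual == expected)
    return results

# Illustrative transition table and test sequence.
transitions = {("A", "button2"): "B", ("B", "button1"): "C"}
sequence = [("A", "button2", "B"), ("B", "button1", "C")]
print(run_test_sequence(sequence, transitions))  # [True, True]
```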
[00115] The testing engine 215 may determine whether the test operation is performed by receiving a test result (e.g., screen shot test results) from the test agent 107 and may determine a test result by collecting error messages generated during the test process. As described, since the testing engine 215 tests based on test sequences registered in the test sequence DB 213 periodically or at predetermined timing, a user path sequence is reflected in the test through such a process.
[00116] Examples of the above-described operation of the test automation system will now be described.
[00117] FIG. 13 is a flowchart of a default path sequence generation process according to an example.
[00118] Referring to FIG. 13, the default path sequence generator 203 may collect screen information crawled by a distribution target application in operation S101.
[00119] The default path sequence generator 203 may extract a plurality of elements from the collected screen information in operation S103.
[00120] The default path sequence generator 203 may label the meaning of each of the elements by using attribute information of the extracted element or a learning result through the machine learning algorithm in operation S105.
[00121] The default path sequence generator 203 may generate a connection structure of meaningful elements based on the labeling information and generate default path sequences from the connection structure in operation S107. The connection structure may be in the form of a tree structure or state diagram of the states that are transitioned by the elements. In this case, the path sequences generated in operation S107 may form the path graph described with reference to FIG. 6.
[00122] Among the path sequences generated in operation S107, sequences that are connected from the start state to the end state may be generated as at least one test sequence. [00123] FIG. 14 is a flowchart of a user path sequence generation process according to an example.
[00124] Referring to FIG. 14, the user path sequence generator 207 may collect user behavior information using the distribution target application in operation S201.
[00125] The user path sequence generator 207 may extract a meaningful user path sequence from the collected user behavior information in operation S203.
[00126] The user path sequence generator 207 may cluster the extracted user path sequence using machine learning and may label classification information corresponding to the cluster in operation S205. The classification information may refer to an intention or a purpose of a user path sequence. [00127] FIG. 15 is a flowchart of a user path sequence generation process according to an example.
[00128] Referring to FIG. 15, the user path sequence generator 207 may collect behavioral information of users (e.g., developers) using the application before distribution at different time points in operation S301. The user path sequence generator 207 may generate a user path sequence in which user paths repeatedly generated from collected behavior information are sequentially connected with each other in operation S303.
[00129] The user path sequence generator 207 may register labeling information that indicates a user’s intention that corresponds to the user path sequence generated in operation S303 in operation S305. In this case, the labeling information is input from the outside.
[00130] The user path sequence generator 207 may generate a user path sequence that corresponds to a specific user’s intention as a test sequence by using the labeling information in operation S307.
[00131] FIG. 16 is a block diagram of a test automation system according to an example.
[00132] Referring to FIG. 16, an example configuration is described. However, in various examples, the elements of FIG. 16 may be added to the configuration of FIG. 1. That is, although the elements of FIG. 1 are not illustrated in FIG. 16, the example of FIG. 16 may further include the configuration of FIG. 1. In this case, the same elements as those in FIG. 1 are referred to with the same reference numerals.
[00133] Referring to FIG. 16, a user device 100' in a test automation system 1' is a device in which a distributed application 101 is loaded. A user behavior collector 105 may transmit user behavior information collected from the application 101 to a user path sequence generator 207 of a test server 200'. [00134] The user path sequence generator 207 may generate a user path sequence from the user behavior information using the same example method described with reference to FIG. 1 to FIG. 15. In addition, the user path sequence may be output to a personalizer 217.
[00135] The personalizer 217 may propose an appropriate path sequence to a user by learning a user’s pattern based on the user path sequence.
[00136] The personalizer 217 may provide guide information for a user to the application 101. The guide information may propose a preferred sequence that executes a specific function. In this way, malfunctions such as falling into an infinite loop due to a wrong sequence may be prevented. [00137] The personalizer 217 may provide the guide information in conjunction with an Android agent 109 or in conjunction with JavaScript.
[00138] FIG. 17 is a flowchart of an expansion process of a test sequence according to an example.
[00139] Referring to FIG. 17, the personalizer 217 may compare user path sequences received from the user path sequence generator 207 with verified test sequences registered in the test sequence DB 213 in operation S401 and determine whether there is a mismatch in operation S403.
[00140] If there is a match, the process may be terminated. If a mismatch occurs, the personalizer 217 may register the user path sequences as a verification target in the test sequence DB 213 in operation S405. The user path sequences registered as the verification target are provided to a user such as a developer. In addition, when the developer accepts the user path sequences as a test target, they may be used in the test.
[00141] FIG. 18 is a flowchart of a process for providing guide information according to an example.
[00142] Referring to FIG. 18, the user path sequence generator 207 may collect behavior information of general users from the user behavior collector 105 in operation S501 .
[00143] The user path sequence generator 207 may extract meaningful user path sequences from the behavior information of the general users in operation S503.
[00144] The user path sequence generator 207 may cluster the extracted user path sequence using machine learning, and label classification corresponding to the cluster in operation S505. For example, when the user path sequence is “zoom-in -> scan -> zoom-in -> scan”, the user path sequence is labeled with the intention of the user to increase resolution.
[00145] The personalizer 217 may check similarity between metadata of the user path sequence and metadata of the test sequence registered in the test sequence DB 213 in operation S507.
[00146] The personalizer 217 can extract the most relevant test sequence as the guide path based on the relationships between elements in the paths that form the sequence in operation S509. Relevance is determined based on metadata, such as an attribute of an element. In this case, the extracted guide path sequence may be “Setting > Scan > Option > Resolution”.
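As an illustrative sketch of this metadata-based relevance check, keyword overlap stands in for whatever similarity measure the system actually uses; the registered sequences and keyword sets are invented:

```python
# Hypothetical registered test sequences with metadata keyword sets.
test_sequences = {
    "Setting > Scan > Option > Resolution": {"scan", "resolution", "option"},
    "File > Print > Copies": {"print", "copies"},
}

def best_guide(user_keywords):
    """Pick the registered sequence whose metadata overlaps the user's intent most."""
    return max(test_sequences,
               key=lambda seq: len(test_sequences[seq] & user_keywords))

# A "zoom-in -> scan" pattern labeled with a resolution intention.
guide = best_guide({"zoom-in", "scan", "resolution"})
print(guide)  # Setting > Scan > Option > Resolution
```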
[00147] The personalizer 217 may provide a guide message including a message such as “High resolution image scanning is available by increasing resolution” together with the extracted guide path sequence in operation S511. [00148] FIG. 19 illustrates a personalized menu providing method according to an example, and FIG. 20 illustrates a personalized menu providing method according to an example.
[00149] Referring to FIG. 19, a personalizer 217 may learn a unique pattern of a user to provide an option for selecting a menu structure suitable for the user. That is, the personalizer 217 may provide a recommended menu according to an action sequence analysis of a user.
[00150] In FIG. 19, (A) shows a default menu and (B) shows a menu displayed at the highest level. A menu repeatedly used, according to the analysis of the user's action sequence, may be displayed at the highest level.
[00151] Referring to FIG. 20, the personalizer 217 may arrange menus in order of frequency of use (e.g., hits) according to analysis of the user’s action sequence.
[00152] As such, the personalizer 217 may provide a frequently used menu tree as an option according to the frequency of use. For example, a user manual can be provided according to a user's intention. Conventionally, a user guide may be generated and distributed in advance and may not reflect a changed state until an editor directly modifies it. However, in various examples, a menu configuration suitable for the user characteristics inferred from the user behavior information, and a user guide that matches it, can be accurately generated and distributed in real time.
[00153] In addition, the menu initialization may be carried out for returning to the initial menu.
[00154] FIG. 21 illustrates a method for providing a personalized workflow according to an example.
[00155] Referring to FIG. 21 , a plurality of blocks A, B, C, D, E, F, G, H, and I are disposed on a screen as shown at (A). Each of the blocks may include at least one of an application, a software component, a function, and the like. [00156] The personalizer 217 may extract a workflow of a user by learning a pattern of the user based on a user path sequence provided from a user path sequence generator 207. The workflow may imply sequential arrangement of a plurality of blocks used by the user as shown at (B).
[00157] The personalizer 217 may sequentially arrange combinations of used blocks as shown at (B) in order of highest frequency of occurrence. [00158] For example, workflow #1 (WP #1) is the most frequent combination (e.g., 102 occurrences), workflow #2 (WP #2) is the next most frequent (e.g., 72 occurrences), and workflow #3 (WP #3) is less frequent (e.g., 56 occurrences). Thus, the personalizer 217 may arrange a sequence as workflow #1 (WP #1) → workflow #2 (WP #2) → workflow #3 (WP #3).
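The frequency ordering above amounts to a descending sort over occurrence counts; a minimal sketch using FIG. 21's example values (the dictionary representation is an assumption):

```python
# Workflow occurrence counts from FIG. 21's example.
occurrences = {"WP #1": 102, "WP #2": 72, "WP #3": 56}

# Arrange workflows in order of highest frequency of occurrence.
ordered = sorted(occurrences, key=occurrences.get, reverse=True)
print(ordered)  # ['WP #1', 'WP #2', 'WP #3']
```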
[00159] As shown at (C), the personalizer 217 may prompt the user to make a selection by arranging the workflows while providing guide information to the user, and a selected workflow may be distributed.
[00160] Workflows composed of combinations of blocks are so numerous that pre-validation is nearly impossible. However, workflows that are optimized for the user's patterns can be generated and provided after verifying them through a test.
[00161] Therefore, even if functions are arranged by a developer, they may be reorganized into a configuration that reflects the user's frequent usage.
[00162] In addition, a sequence that is frequently used by the user is proposed as a single workflow, and distributed blocks (components, functions, apps, etc.) can be connected and used organically upon a user’s selection. [00163] In addition, it is possible to provide a highly integrated workflow service by organically tying object types commonly used in object-oriented languages or development environments.
[00164] FIG. 22 illustrates a process for generating a tutorial content according to an example.
[00165] Referring to FIG. 22, a personalizer 217 may generate tutorial content or a guideline as shown at (B) based on a test sequence collected from a test sequence DB 213 as shown at (A).
[00166] The personalizer 217 may execute an application 101 based on a test sequence registered in the test sequence DB 213 and may capture an executed screen. In addition, motion process captions may be attached to the captured screen. As the motion process caption, a label assigned to an element or a GUI may be used. The personalizer 217 can generate pages with words and captured screens in a sequence and generate tutorial content with the pages arranged according to a menu structure. Therefore, even if the developer does not produce a separate guideline, a guide service can be provided based on the test scenario.
[00167] FIG. 23 illustrates a method for providing guide information to a user of an application according to an example.
[00168] Referring to FIG. 23, a test server 200” has a different configuration and operation from the configuration and operation respectively shown in FIG. 1 and FIG. 2 according to an example. In FIG. 23, only necessary elements for description of the example are included.
[00169] When a user manipulates a machine using a GUI, a camera may photograph the screen and transmit the captured image to the UI learning module 201. The UI learning module 201 may perform UI learning using a method as described in the various examples of FIG. 1 to FIG. 15. The user path sequence generator 207 may generate a user path sequence according to the UI manipulation of the user, using a method as described in the various examples of FIG. 1 to FIG. 15, based on a UI learning result of the UI learning module 201. [00170] The test case DB 213 may store a test case, which is the path sequence generated from a machine learning result of an operation method and process of a machine using a GUI.
[00171] The personalizer 217 may compare a user path sequence according to the UI manipulation of the user with a pre-stored test case to monitor whether the user path sequence matches the test case. When the user path sequence does not match a learned test case, a warning alarm may sound and a record of the exception may be stored. The warning alarm may be a screen popup, a warning sound, and the like, such as a warning message (e.g., "The safety apparatus operates in a manual mode set by a user. It is dangerous because it deviates from the normal operation method.").
[00172] In addition, the developer can view the history of exceptions and retrain risk/safety on a case-by-case basis. Log records of user path sequences can be retrieved.
[00173] FIG. 24 is a schematic diagram of a computing device according to an example.
[00174] A user device 100, 100’ and a test server 200, 200’, 200” as described with reference to FIG. 1 to FIG. 23 may execute a program in which instructions for execution of example operations as described above are included in a computing device 300 operating by at least one processor.
[00175] Referring to FIG. 24, hardware of the computing device 300 may include at least one processor 301, a memory 303, a storage unit 305, and a communication interface 307, which may be connected through a bus. In addition, hardware such as an input device and an output device may be included. The computing device 300 can be loaded with a variety of software, including an operating system that can run programs.
[00176] As a device for controlling an operation of the computing device 300, the processor 301 may be a processor of various types for processing instructions included in a program, and may be, for example, a central processing unit (CPU), a microprocessor unit (MPU), a microcontroller unit (MCU), a graphics processing unit (GPU), and the like. The memory 303 may load a corresponding program such that instructions to execute an example operation as described above are processed by the processor 301. The memory 303 may be, for example, a read only memory (ROM), a random access memory (RAM), and the like. The storage unit 305 may store various data, programs, etc. required to execute an operation of the present invention. The communication interface 307 may be a wired/wireless communication module.
[00177] According to the above-described examples, test coverage can be continuously extended by comparing the developer's user scenarios and intentions. As a result, test validation in a personalized domain that was previously difficult is easier even after the product is released.
[00178] The user's exceptional behavior can be detected, and the user can be guided with correct behaviors. In addition, user-based personalized menus, personalized workflow design, and testing are possible.
[00179] Various examples as described above are not only implemented by an apparatus and a method, but may be implemented by a program or a non- transitory recording medium having the program recorded therein that realizes functions corresponding to the configurations of the examples.
[00180] While various examples have been described, it is to be understood that the invention is not limited to the disclosed examples. On the contrary, it is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A computing device comprising: a memory; and at least one processor to execute instructions of a program loaded in the memory, wherein the program comprises instructions to: generate a first path that indicates state transition information of an application switched by a user’s behavior based on behavior information of the user using the application, collected from the application; extract a plurality of elements that form screen information crawled in the application, and generate a second path that indicates state transition information of an application switched by each of the plurality of elements; and generate at least one test scenario in which a plurality of paths are sequentially connected by using the first path and the second path.
2. The computing device of claim 1, wherein the instructions to generate the first path comprise instructions to: extract elements where the user’s behavior has occurred from the behavior information; label a meaning of each of the extracted elements by using attribute information of the extracted elements or a machine learning algorithm; and generate a connection structure of the elements based on the labeled meanings and generate the first path from the connection structure.
3. The computing device of claim 1, wherein the instructions to generate the second path comprise instructions to: extract elements set to interact with the user among the plurality of elements; classify the extracted elements into a plurality of clusters by training using a deep learning model; label a meaning of an element that corresponds to each of the classified clusters by using attribute information of the element or a machine learning algorithm; and generate a connection structure of the elements based on the labeled meanings and generate the second path from the connection structure.
4. The computing device of claim 1, wherein the instructions to generate the test scenario comprise instructions by which a repeated path sequence that is repeatedly generated in collected behavior information is extracted, a purpose of the repeated path sequence is labeled, and the repeated path sequence having the purpose labeled is generated as a test scenario.
5. The computing device of claim 1, wherein the instructions to generate the test scenario comprise instructions to: generate a path graph that indicates a set of sequences where the first path and the second path are sequentially connected; and generate path sequences that are connected from at least one start state to at least one end state as at least one test scenario based on the path graph.
6. The computing device of claim 5, wherein the instructions to generate the path graph comprise instructions to generate the path graph by optimizing sequential connection of the first path and the second path.
7. The computing device of claim 5, wherein the program further comprises instructions to: collect general user behavior information from at least one terminal where the application is loaded; generate a third path that indicates state transition information of the application based on the general user behavior information; determine whether a general user path sequence where the third path is sequentially connected is included in the path graph; and register the general user path sequence as a verification target when the general user path sequence is not included in the path graph.
8. The computing device of claim 7, wherein the program further comprises instructions to: infer an intention of the general user path sequence based on labeled information registered with respect to the general user path sequence; and generate a recommended path sequence for achieving the inferred intention as guide information for a general user.
9. The computing device of claim 8, wherein the instructions to generate the guide information comprise instructions to: select at least one element of which an attribute is related with the inferred intention from among a plurality of elements, which are units for generation of transition between application states, and at least one of which is included in a page that forms the application; and generate at least one path sequence generated based on the at least one selected element as the guide information.
10. The computing device of claim 1, wherein the program further comprises instructions to: generate a plurality of workflows where a plurality of elements in which the user’s action is generated are connected based on an order of use; calculate an occurrence frequency of each of the plurality of workflows; provide the plurality of workflows as guide information to a user device by arranging the plurality of workflows in order of highest frequency of occurrence; and distribute at least one of the plurality of workflows selected by the user device as a personalized menu to the user device.
11. The computing device of claim 1, wherein the program further comprises instructions to: form and distribute a user manual based on a user pattern inferred from the user behavior information; and update the user manual based on the user behavior information.
12. An operation method of a computing device operating by at least one processor, the operation method comprising: collecting behavior information of a user using an application; learning elements that are extracted from the behavior information and labeling a meaning of each element; generating a connection structure of the elements based on the labeled meaning, and generating user paths that indicate state transition information of the application, sequentially switched by the elements from the connection structure; generating user path sequences where the user paths are sequentially connected; and generating a test scenario for testing the application by using the user path sequences.
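Under assumed data shapes that the claim does not specify, the method of claim 12 can be sketched end to end: behavior records become labeled elements, labeled elements become user paths, and chained paths become a user path sequence.

```python
# Minimal sketch of claim 12 under invented data shapes: each behavior
# record is (current_page, element, next_page), and the learning/labeling
# step is stubbed as a plain lookup table.

behavior = [
    ("Home", "btn_settings", "Settings"),
    ("Settings", "btn_wifi", "Network"),
]
labels = {"btn_settings": "open settings", "btn_wifi": "configure wifi"}

# User paths: state transitions of the application switched by elements,
# each transition carrying its labeled meaning.
user_paths = [
    {"from": src, "to": dst, "element": el, "meaning": labels[el]}
    for src, el, dst in behavior
]

# A user path sequence: the user paths connected sequentially.
user_path_sequence = [p["from"] for p in user_paths] + [user_paths[-1]["to"]]
# user_path_sequence is ["Home", "Settings", "Network"]
```

In the patent the labeling step is learned rather than looked up; the table here only stands in for that learned model.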
13. The operation method of claim 12, wherein the generating of the test scenario comprises: inferring a meaning corresponding to each user path sequence by learning the user path sequences using a machine learning algorithm; labeling each of the user path sequences with the inferred meaning; and generating a test scenario for testing the application from among the respective user path sequences based on the labeled meaning.
14. The operation method of claim 13, wherein the user path sequence is a repeated path sequence that is repeatedly generated in the collected behavior information, and wherein, in the generating of the test scenario, a purpose of the repeated path sequence is labeled, and the repeated path sequence labeled with the purpose is generated as a test scenario.
15. The operation method of claim 12, wherein the generating of the test scenario comprises: generating default paths that indicate state transition information of an application, switched by a plurality of elements extracted from screen information crawled in the application; generating a default path sequence where default paths are sequentially connected; generating a path graph that indicates a set of default path sequences and the user path sequence; and generating path sequences that are connected from at least one start state to at least one end state as at least one test scenario based on the path graph.
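The scenario generation of claim 15 amounts to enumerating the path sequences that connect a start state to an end state in the path graph, each such sequence becoming one candidate test scenario. The sketch below uses a depth-first walk over an invented adjacency-list graph; all names are illustrative.

```python
# Hedged sketch of claim 15: enumerate every acyclic path sequence from a
# start state to an end state in the path graph; each result is one
# candidate test scenario. Graph contents are invented for the example.

def scenarios(graph, starts, ends):
    """Depth-first enumeration of acyclic paths from starts to ends."""
    results = []

    def walk(state, path):
        if state in ends:
            results.append(path)   # stop at an end state for simplicity
            return
        for nxt in graph.get(state, []):
            if nxt not in path:    # avoid revisiting a state (no cycles)
                walk(nxt, path + [nxt])

    for s in starts:
        walk(s, [s])
    return results

graph = {"Start": ["Home"], "Home": ["Scan", "Settings"],
         "Scan": ["Done"], "Settings": ["Done"]}
tests = scenarios(graph, {"Start"}, {"Done"})
# tests is [["Start", "Home", "Scan", "Done"],
#           ["Start", "Home", "Settings", "Done"]]
```

Real application graphs are cyclic and large, so a practical implementation would bound path length or coverage rather than enumerate exhaustively; the claim itself leaves that choice open.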
PCT/US2020/020045 2019-09-25 2020-02-27 Test automation of application WO2021061185A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0118287 2019-09-25
KR1020190118287A KR20210036167A (en) 2019-09-25 2019-09-25 Test automation of application

Publications (1)

Publication Number Publication Date
WO2021061185A1 true WO2021061185A1 (en) 2021-04-01

Family

ID=75164960

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/020045 WO2021061185A1 (en) 2019-09-25 2020-02-27 Test automation of application

Country Status (2)

Country Link
KR (1) KR20210036167A (en)
WO (1) WO2021061185A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220171510A1 (en) * 2020-11-10 2022-06-02 T-Mobile Usa, Inc. Automated testing of mobile devices using behavioral learning

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102451099B1 * 2021-06-29 2022-10-07 주식회사 소프트자이온 Purchase inducement system and method through inferencing user's needs based on artificial intelligence
KR102456354B1 (en) * 2022-05-31 2022-10-21 부경대학교 산학협력단 Arduino-based smart recirculating aquaculture practice system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050054381A1 (en) * 2003-09-05 2005-03-10 Samsung Electronics Co., Ltd. Proactive user interface
US20140122043A1 (en) * 2012-11-01 2014-05-01 University Of Nebraska Linking graphical user interface testing tools and human performance modeling to enable usability assessment
US20190235726A1 (en) * 2018-01-31 2019-08-01 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing intelligently suggested keyboard shortcuts for web console applications



Also Published As

Publication number Publication date
KR20210036167A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN107783899B (en) Method and device for testing H5 page in application program and computer equipment
Fazzini et al. Automated cross-platform inconsistency detection for mobile apps
US7810070B2 (en) System and method for software testing
US8504803B2 (en) System and method for creating and executing portable software
US9003423B1 (en) Dynamic browser compatibility checker
JP4395761B2 (en) Program test support apparatus and method
WO2021061185A1 (en) Test automation of application
US20130275946A1 (en) Systems and methods for test development process automation for a test harness
CN110928763A (en) Test method, test device, storage medium and computer equipment
US9501388B2 (en) Method and system for creating reference data
CN113505082B (en) Application program testing method and device
CN110825618A (en) Method and related device for generating test case
US20140082582A1 (en) Resource Tracker
Xu et al. Guider: Gui structure and vision co-guided test script repair for android apps
CN113590454A (en) Test method, test device, computer equipment and storage medium
CN109783355A (en) Page elements acquisition methods, system, computer equipment and readable storage medium storing program for executing
CN112231197A (en) Page testing method and device and storage medium
US10042638B2 (en) Evaluating documentation coverage
US11372750B2 (en) Test script for application under test having abstracted action group instantiations
US9104573B1 (en) Providing relevant diagnostic information using ontology rules
Mao et al. User behavior pattern mining and reuse across similar Android apps
CN117112060A (en) Component library construction method and device, electronic equipment and storage medium
CN111679976A (en) Method and device for searching page object
CN111414309A (en) Automatic test method of application program, computer equipment and storage medium
CN107797917B (en) Performance test script generation method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20867982

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20867982

Country of ref document: EP

Kind code of ref document: A1