GB2537341A - The sequenced common action and flow representation automation model (SCAFRA) - Google Patents

The sequenced common action and flow representation automation model (SCAFRA) Download PDF

Info

Publication number
GB2537341A
GB2537341A GB1500626.5A GB201500626A GB2537341A GB 2537341 A GB2537341 A GB 2537341A GB 201500626 A GB201500626 A GB 201500626A GB 2537341 A GB2537341 A GB 2537341A
Authority
GB
United Kingdom
Prior art keywords
test
framework
scenario
database
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1500626.5A
Other versions
GB201500626D0 (en)
Inventor
Macdonald Craig
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Veefour Ltd
Original Assignee
Veefour Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Veefour Ltd filed Critical Veefour Ltd
Priority to GB1500626.5A priority Critical patent/GB2537341A/en
Publication of GB201500626D0 publication Critical patent/GB201500626D0/en
Publication of GB2537341A publication Critical patent/GB2537341A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3664Environments for testing or debugging software

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A test tool comprises an application specific database of code components and an application agnostic execution framework. The framework executes scenarios, which comprise individual test steps, each of which is defined in the database. The test steps may emulate human input to the application under test. Data to be input to the application under test may be organized into tables in the database. Graphical user interface information may also be stored in the database. A given scenario may be executed on different devices supported by the database. The results for the test may be stored in the database.

Description

The Sequenced Common Action and Flow Representation Automation Model (SCAFRA)
DESCRIPTION
Background -software testing
The purpose of software testing is to prove that software behaves as per its specifications and requirements.
Business requirements and specifications are normally used to describe how a piece of software should behave.
Test conditions are derived from the business requirements and are then used to create test steps. Test steps are designed to exercise the software (in the form of user actions and expected results).
The actions to perform and expected results are used to form a test script.
Any detected deviation from the specified behaviors constitutes a 'defect'. The process of test execution on large and complex systems is often highly laborious and monotonous.
Test automation aims to increase defect detection by replicating the actions a human being would perform to verify the quality of the software being tested.
Typical problems with test automation
Organizations with a requirement for test automation will often purchase an automated testing 'tool' (a piece of software capable of capturing and executing user actions).
Without a high level of practical expertise and training being readily available, these tools tend to be used for simple, high volume repetitive tasks such as data entry, typically using the recording and replaying features incorporated into the tool.
In order to realize the full potential of test automation and deliver a return on investment, an automation framework can be implemented.
An automation framework provides a means of controlling the execution of user actions, and usually requires a high level of expertise with the test tool and some ability to write basic program code. Such frameworks are usually 'code-heavy' and opaque in their design, requiring excessive maintenance whilst delivering limited and highly specific test coverage. Manipulation of the testing processes encapsulated in the framework can prove cumbersome, and flexibility is often limited or non-existent. The test script, user actions and input data are often hard coded. Incorporating support for new test functionality and new scenarios in frameworks structured in this ad-hoc manner can be a highly complex and labour intensive undertaking.
Often, a lack of available expertise or time results in an automated test 'pack' that is not based on existing manual test scripts at all and exists only to perform basic system checks (also referred to as a 'smoke test'). Providing estimates of test coverage or return on investment for packs of this nature is difficult.
If an organization has multiple software applications requiring automation, it is likely that these applications will have their own bespoke frameworks, increasing the maintenance requirement and developmental workload dramatically.
Framework development and maintenance usually remains within the remit of an automation expert or select members of the manual testing team.
Non-technical users are generally excluded from development, execution and results analysis, which can erode business confidence in the automation process.
Solving the traditional automation problems (what the invention does)
The Sequenced Common Action and Flow Representation Automation Model (SCAFRA) solves the problems of creating and maintaining test automation by providing:
1. A simple, intuitive and consistent framework structure for developing test steps and test scenarios.
2. A logical and transparent code structure with functions dedicated to processing instructions as they are presented, in sequence (instructions are stored independently of the code).
3. Comprehensive methods for managing test data and deploying test data strategies.
4. The ability to develop new tests without amendment to the core code set.
5. Complete removal of test processes, test data and application flows from code.
6. True test tool and application-under-test (AUT) agnosticism: the principal features of the SCAFRA framework design are deployable in any test automation tool capable of interacting with the chosen application platform and the instruction / command database (the framework database).
7. Methods for sequencing tests, grouping tests for execution based on functional designations, distributing the test workload and real-time reporting.
The SCAFRA framework design is a simple and dependable method for the development and execution of automated tests, using a combination of an object-oriented code set and an automation framework database.
The SCAFRA framework design is independent of any automated testing software or application technologies.
The framework is structured to target the common actions performed during application interactions -interactions ubiquitous to all applications whose state is dependent on human input or data presentation, referred to in the framework as the navigation action classifications. Using this method removes any ambiguity associated with conversion of a manual test, and enables automation of complex AUT functionality with minimum effort.
Automated tests in the SCAFRA framework are very similar in behavior and appearance to their manual counterparts -and in most cases are actually identical. The design and flow of the source manual test script governs the design and flow of the automated equivalent.
The framework design is structured to channel the conversion process, resulting in robust and repeatable tests. The options presented to the end user undertaking the task of test automation are clearly limited in scope (navigation action type), but no restrictions are placed on combinations of options in a given sequence (a sequence of actions forms a test step). Once the sequence of actions has been captured and stored in the framework database, it can be referred to as a test step, and can be referenced by any other test authored by any other user.
The framework code set and framework database (features)
The framework code set (FWCS) is comprised of functions for the recall and execution of test step instructions, direct interaction with the application under test (AUT), and reporting of action outcomes.
The requests for test actions and the data to input exist as sequenced instructions in the FWDB. The test engineer selects and combines basic actions in a logical sequence, mirroring the AUT behaviors (or deliberately not mirroring the flow to provoke errors) to form a test step. Once created, the test step is given a 'name' (the operational dataset) which can then be referenced by other tests if required, minimizing effort and maximizing re-use. These test steps form the basic building blocks of any test executed via the framework.
Test steps are combined to form test scenarios.
The test scenarios, navigation instructions and AUT input data exist purely in the FWDB; no physical test instructions or data exist in the framework code. The framework code functions interpret and process the instructions extracted from the FWDB at run time.
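As a concrete illustration of this separation, the sketch below (in Python, using illustrative table and column names that are assumptions rather than definitions from this patent) shows how a two-step scenario could exist purely as sequenced records in the FWDB, with no test logic held in code:

    import sqlite3

    fwdb = sqlite3.connect(":memory:")  # stands in for the framework database (FWDB)
    fwdb.execute("""CREATE TABLE scenario_control (
        scenario_index INTEGER,           -- unique identifier for the test scenario
        operation_sequence_index INTEGER, -- enforces the order of the test steps
        operation TEXT,                   -- functional grouping / target application
        operational_dataset TEXT,         -- the named, re-usable test step to perform
        include_flag INTEGER,             -- NULL means the step is excluded from execution
        comments TEXT)""")

    # A two-step scenario held entirely as data; the framework code only interprets it.
    fwdb.executemany("INSERT INTO scenario_control VALUES (?, ?, ?, ?, ?, ?)", [
        (1, 10, "DemoApp", "Login_ValidUser", 1, "log in with a valid account"),
        (1, 20, "DemoApp", "CreateOrder_Basic", 1, "raise a simple order"),
    ])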
Automation behavioral patterns are strictly controlled and centralized within the object-oriented framework code set. This delivers many benefits, most notably:
* True scriptless automation: test instructions and data exist entirely independently of the framework code. The code exists purely to interpret instructions held in the FWDB.
* Simplicity: targeting the lowest-level actions and removing any anticipatory logic from the automation code eliminates processing complexity and provides single points in the code set for behavioral analysis and debugging.
* Ease of maintenance: the code is transparent and centralized, meaning less time is required to investigate and fix in the event of framework code related failures.
* GUI identification information storage: all GUI identification data is stored within the FWDB, and can be manipulated at any time in order to facilitate execution against multiple versions of the same AUT.
* Reliability: the logic of the FWCS dictates that execution restarts or ceases in the event of a terminal scenario failure (scenario termination). Individual test failures do not result in the entire execution cycle being abandoned.
Through this inherent simplicity and the use of natural test language and naming conventions, non-technical users are able to develop and execute test scenarios as easily as they would manually.
Individual framework test steps (operations) have three primary classifications:
Navigation: user actions carried out against the AUT, including human inputs (physical actions) and data treatment.
Verification: verification points - checks performed to ascertain the application state (these checks may also be integrated into navigation steps).
Bespoke: 'special' application behaviors that do not conform to the standard step model for navigation or verification, and may require their own dedicated functions in the FWCS.
All framework test steps, regardless of classification, have a unique identifier, referred to as an operational dataset.
The scope of actions contained within each navigation-type operational dataset is defined only by the user. No limitations are placed on the number of navigational sub-steps (actions, inputs) that can be referenced within that step.
Once created, the operations are available to all other users of the framework for inclusion in their tests if required.
Operation steps can be run in isolation for the purpose of debugging, by calling the operation execution function with the appropriate arguments.
All developed test scenarios can be run individually by using a call to the scenario execution function, or collectively using the test execution function. If run collectively, the scenario index table is used as a driver for selection of the tests to be run.
Developed test scenarios can be classified by functional area (subsystem), and grouped as such for targeted execution against functional areas in the AUT. They can also be sequenced by allocation of a sequence index, enforcing the order of execution.
Test data management
AUT form input representation tables (screens within the software designed to receive data input) are treated as distinct storage entities within the FWDB; each occurrence of these forms has its own data storage table in the FWDB, named logically to create a visual association with the physical software input displays - for example, the Login 'form' has a corresponding FWDB table known as 'AUT login'. The columns of the form input representation tables correspond to the actual input fields in the AUT - for example, if the 'login' form contained two fields, 'username' and 'password', the AUT login table will contain two columns named logically to establish a visual relationship between the FWDB table and the actual AUT table as displayed.
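By way of illustration only, a hypothetical 'AUT login' form representation table (the table and column names below are assumptions for the sketch, not taken from the patent) might mirror the two login fields and be keyed by the data identifier declared at step level:

    import sqlite3

    fwdb = sqlite3.connect(":memory:")
    # One FWDB table per AUT input form; columns mirror the form's input fields.
    fwdb.execute("""CREATE TABLE aut_login (
        data_identifier TEXT,  -- declared at step level to select this row
        username TEXT,         -- corresponds to the AUT 'username' field
        password TEXT          -- corresponds to the AUT 'password' field
    )""")
    fwdb.execute("INSERT INTO aut_login VALUES ('VALID_USER', 'jsmith', 's3cret')")
    fwdb.execute("INSERT INTO aut_login VALUES ('LOCKED_USER', 'lbrown', 'expired1')")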
Input values held within the AUT form input representation tables are indexed by 'data identifier' - an identification index declared at step level. The actual input values can be of the following types:
Literal: the value supplied is input as presented, with no treatment or translation.
Referential: the value supplied refers to a pre-loaded array / collection key. Used to input values already extracted from an external data source, such as the AUT database, for tests that rely on pre-loaded or existing data ('trigger values').
Comparative: the value supplied is compared against a specified value (explicit), an externally calculated value or a reference to a pre-loaded value (a global scenario value / data item).
SQL: simple SQL statements to generate values for input - often used for extraction of specific values from a database, or for SQL date string preparation functions.
Behavioural: values that trigger specific input behaviors, including:
* failure provocation - deliberate error or value provocation, where the 'normal logic' mode of input treatment is reversed, i.e. a failure to input due to AUT rules is a 'pass' (it would normally report as a failure)
* clearing existing field values
* other specialist selection / input behaviours for 'exotic' AUT field objects
Extraction: 'scraping' of presented values from an AUT display for storage and output file preparation.
There are no restrictions placed on the nature of the data or how it is arranged. The framework simply interprets values as presented; if no value is presented for a given field in a navigation step involving value input, that field will receive no input. There are also no restrictions whatsoever on the number of data input navigation steps in a scenario.
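The listing below is a minimal sketch, not the patent's code, of how an input value might be resolved according to its declared type before being sent to the AUT; the function name, type labels and parameters are assumptions made for illustration:

    def resolve_input_value(value_type, raw_value, preloaded=None, fwdb=None):
        """Return the value to input for one field, according to its declared type."""
        if value_type == "literal":
            return raw_value                                # input exactly as presented
        if value_type == "referential":
            return preloaded[raw_value]                     # pre-loaded 'trigger value' lookup
        if value_type == "comparative":
            return ("compare", raw_value)                   # flags a comparison rather than an input
        if value_type == "sql":
            return fwdb.execute(raw_value).fetchone()[0]    # simple SQL generates the value
        if value_type == "behavioural" and raw_value == "CLEAR":
            return ""                                       # e.g. clear the existing field value
        raise ValueError("unsupported value type: " + value_type)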
Workload management and test execution
The workload of test execution (the volume of test scenarios to run) can be distributed between the available execution devices (workstations, mobile devices) by assigning a 'machine name' to each scenario record in the scenario index table.
Total test workload flow: TEST EXECUTION → SCENARIO EXECUTION → OPERATION EXECUTION
As each execution device commences test execution, it refers to a locally stored initialization file for its designated device name and test run group ID.
Run time test governance parameters
The execution device then uses the test run group ID to collect global execution parameters for the test run - including a designated run name used to organize test results, provide real-time monitoring, and to generate a list of scenarios to execute for the specific device.
Test Execution
Before test execution commences, a query is sent to the FWDB to create a scenario execution recordset (a list, from the scenario index table) of test scenarios allocated to the executing machine, sorted by priority (execution sequence) and identified by the locally stored device (machine) name. The executing machine then iterates through the recordset, obtaining the designated scenario index (the unique identifier for the test scenario), which is subsequently used to obtain the scenario instruction records.
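A hedged sketch of that flow follows; the table and column names are assumptions, and execute_scenario stands in for the scenario execution function described next:

    def run_allocated_scenarios(fwdb, machine_name, execute_scenario):
        """Build the scenario execution recordset for this device and run it in priority order."""
        recordset = fwdb.execute(
            "SELECT scenario_index FROM scenario_index_table "
            "WHERE machine_name = ? ORDER BY execution_sequence",
            (machine_name,)).fetchall()
        for (scenario_index,) in recordset:
            execute_scenario(fwdb, scenario_index)  # obtain and process the scenario instruction records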
Scenario Execution
Each scenario identified for execution in the scenario recordset prompts a call to the scenario execution function, which in turn queries the scenario control table for the scenario control recordset (a list, from the scenario control table), sorted by step index (ascending, low to high), for the provided scenario index. The records contained in the scenario control recordset provide the following information:
1. The OPERATION to perform (named as per the application that will be receiving interaction - used to establish a functional grouping).
2. The OPERATIONAL DATASET - a reference to the actual step to perform within the functional grouping of the OPERATION. This value is used to query the OPERATIONAL CONTROL TABLE for the actual navigation and input data to be used within that individual step.
3. Supplementary information such as comments.
Operation Execution
The operation execution function implements switch logic based on the provided operation parameter, using the primary operation categorisations (navigation, verification, bespoke) described earlier in this document. The behaviour of each category is described as follows:
(generic steps)
Each step (operation) found in the scenario execution recordset generates a call to the operation execution function. The operation execution function receives parameters for the operation type (the operation) and the step identifier (the operational dataset). The operation parameter determines which operational control table is referenced for the control data, via records held in the available operations table. Once the relevant operational control table is established, that table is queried for the step execution data using the operational dataset as the interrogating index. The operational control table contains the following information (as a minimum):
The reference navigation operation - this identifies the navigation action group which defines the actual physical interaction against the AUT. 'Navigation' actions are defined in the navigation data by operation table, grouped by the reference navigation operation index provided. Navigation actions can be singular or grouped (with no limitation on the number of individual action steps or the definition of a 'navigation action'); the definition of a navigation action is at the discretion of the automation engineer, but it is best practice to create navigation actions that mimic test steps whilst maximising the possibility of re-use through generic design.
The data identifier - this identifies the input data which will (if required, as defined in the navigation data) be recalled from the AUT input representation table and sent into the AUT.
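A simplified, hypothetical rendering of this switch logic is sketched below; the table names, column names and handler mapping are assumptions, and the handlers themselves would drive the chosen test tool:

    def execute_operation(fwdb, operation, operational_dataset, handlers):
        """Dispatch one test step to a navigation, verification or bespoke handler."""
        # The operation parameter determines the categorisation via the available operations table.
        category = fwdb.execute(
            "SELECT category FROM available_operations WHERE operation = ?",
            (operation,)).fetchone()[0]
        # The operational dataset is the interrogating index into the operational control table.
        step = fwdb.execute(
            "SELECT reference_navigation_operation, data_identifier "
            "FROM operational_control WHERE operational_dataset = ?",
            (operational_dataset,)).fetchone()
        return handlers[category](step)  # the handler performs the actual AUT interaction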
('bespoke' steps) Bespoke functionality is beyond the scope of this description, as the methods used are tailored specifically to the requirements of that functionality.
(verification points) Verification points are handled according to the verification point classifications described later in this document.
Halt point resumption
As tests are executed, the framework code set updates the sequence store table with entries showing the test scenario being executed in real time. In the event of a catastrophic failure, or a requirement to pause execution, the executing machines can resume from the point of failure (the scenario being executed when the failure occurred) by interrogating the sequence store table to obtain the last scenario being executed. Removal of the sequence value for the executing machine will result in execution beginning from scratch.
Reporting
As individual test steps are executed, results information is generated and output to the key reporting tables in the FWDB:
Scenarios executed: the main reporting table for all scenarios executed, listing the scenario index, the executing device, the time and date of execution for each scenario, the execution statuses (see below) and the failure point (the test step that generated the fail state), if applicable.
STATUSES
PASS:
The scenario completed as per expectations, and all verification points matched against the available baseline. No further investigation is required, but the test strategy may require a random sampling exercise to check that passes are legitimate.
FAIL: The scenario failed at a particular step (the failure point) but the framework was able to restart execution and proceed to completion. Investigation will be necessary.
VP MISMATCH DETECTED: The scenario was able to complete using the included steps, but the captured states did not match the expected results stored.
Investigation will be necessary.
VP RESULTS INCONCLUSIVE: The scenario was able to complete using the included steps, but one or more verification points for state capture failed to yield meaningful results - usually due to missing baselines (nothing to compare against). Investigation will be necessary.
SCENARIO TERMINATED: The framework was unable to continue execution of the scenario beyond a specific point (the failure point). Execution of that scenario was abandoned.
Scenario detail listing: shows the individual results of each step as executed within the scenario, recording true if successful or false if failed, along with a path to a screenshot.
VP executed: shows the results of verification points (AUT state checks), including:
* the step index for the check
* the designated name of the check
* the type of check performed
* the result (PASS / FAIL)
* detailed findings, including references to captured screenshots and difference files (if applicable)
* a timestamp
Verification point classifications
The verification capabilities and options may vary depending on the chosen test tool used for execution, but will generally fall into the following classifications:
Bitmap: capture a screenshot (or specific screen area) and compare against an established baseline.
Text (including HTML capture): capture screen text (or specific values) and compare against an established baseline.
SQL: execute dynamic SQL against the AUT database, store the recordset and compare against an established baseline.
Self contained: check application values at run time against specific pre-loaded values. These checks are often contained within navigation steps.
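By way of illustration only (the status strings are taken from the reporting section above; everything else, including the function name, is an assumption), a text-type verification point might be evaluated along these lines:

    def check_text_verification_point(captured_text, baseline_text):
        """Compare captured screen text against a stored baseline and return a VP status."""
        if baseline_text is None:
            return "VP RESULTS INCONCLUSIVE"  # nothing to compare against (missing baseline)
        if captured_text == baseline_text:
            return "PASS"
        return "VP MISMATCH DETECTED"         # captured state did not match the stored expected results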
The invention will now be described by way of example and by reference to the accompanying drawings in which: Figure 1 shows an example manual test scenario, along with the equivalent FWDB scenario recordset, to demonstrate deployment of developed automated test steps (operational datasets). Note that the actions and expected results are separated in order to maximize re-use. Steps are included in the same logical sequence, and that logical sequence is enforced by the 'operation sequence index'.
Figure 2 shows how an individual test step is deployed in the FWDB, demonstrating the relationship between the operational dataset and the actual physical navigation actions performed within that step (operations and navigation actions), as well as the logical sequencing of those individual navigation action sub-steps that comprise a navigation step in its entirety. Figure 2 also shows an example of how GUI object identification information is stored within the navigation data table.
Figure 3 shows how navigation sub-steps are ordered, and how providing values in each navigational action column will trigger that action against the designated GUI object.
Figure 3a shows a navigation sub-step to trigger AUT form representation table input, as well as an example of how AUT input data is referenced, via the data identifier index declared at step level. Note that multiple operational datasets for a data range can exist in the operational control table, referencing the same navigation actions repeatedly.
Figure 4 shows how GUI object identification information is stored in the FWDB -either in field type list table (for AUT form representation input) or in the navigation data table (for navigational action inputs), and how that identification information relates back to the actual AUT screens (mock ups provided).
Figure 5 shows how navigational data presents the desired sequence to match the AUT flow (or deliberate error provocation flow for negative tests). An example of AUT synchronization is also included, with AUT screen mock-ups to demonstrate the relationships between steps and AUT states.
Figure 5a shows how individual navigational data sub-steps are processed, via specialised code functions in the FWCS interaction layer. During maintenance of existing support, or inclusion of new support, the FWCS interaction layer is generally the only place where code is altered.
Figure 6 shows the execution flow of a scenario executed via the framework, with input parameters and core functions.
Figure 7 shows how test scenarios are distributed between execution devices, based on data stored in the scenario index table.
Figure 7a shows the flow of data through the scenario execution function, including the key parameters used (and sources for those parameters) for the purposes of generating a list of scenarios to execute.
Ref / Detail:
Fig 1 - Working example of a simple test scenario: manual script and automated equivalent in FWDB
Fig 2 - Working example of a test step referenced by a test scenario
Fig 3 - Working example of a navigation record referenced by a test step
Fig 3a - Working example of a navigation record referenced by a test step, containing a call for AUT table input
Fig 4 - Example of application 'mapping' deployment in FWDB: login screen of a typical app
Fig 5 - Working example of navigation action instruction mapping (screen flow example)
Fig 5a - Working example of how navigation instructions are interpreted by the framework code set (navigation processing)
Fig 6 - Flow chart for framework code set: scenario execution
Fig 7 - Workload distribution (assigning tests to execution machines - the scenario index table)
Fig 7a - Flow chart for framework code set: Scenario Execution function
Components
The Framework Database (FWDB) is the repository schema for captured application under test (AUT) behaviors, contained as records in control tables. It contains sequenced instructions, GUI identification data, AUT input data, form input representation tables (AUT tables), expected results, master control tables, and workload sequencing and distribution tables.
The Framework Codeset (FWCS) is an object-oriented collection of function libraries that acts purely as an interpreter for the sequenced instructions (user actions) contained in the FWDB, and passes test result information back to the FWDB after those actions have been performed.
The FWCS consists of the following layers (across multiple code libraries), listed here as layer, purpose and maintenance frequency:
Global test information collection layer - collection and storage of global framework control parameters, such as: the target baseline for comparison (previous AUT versions or code streams etc.), test behavior boundaries (timeout values, repeat attempts), a 'run name' used to group the results output, and destination URLs, file directory structures and environmental settings. Maintained rarely.
The 'SQL spine' - test action instruction recall; SQL statement execution (internally stored data, externally stored data, verification); dynamic statement or procedure execution. Maintained very rarely.
Data treatment and input preparation - preparation of input values (see the 'trigger values' section); value substitution (dynamically or from values held in memory). Maintained very rarely.
AUT interaction - actual interaction with the application under test: object manipulation commands, synchronization, data input and verification (from the GUI). Maintained whenever a new AUT class / code / platform / language is incorporated into the support scope.
Reporting - high level results feedback from framework execution (scenario pass/fail status) and detailed results. Maintained rarely.
The primary principles / features of the Scriptless Step Driven Flow Capture Test Automation framework design are:
a) Test tool agnosticism and independence
a. The framework design components are not tied to any software package or suite.
b. The framework functionality can be ported / adapted to work with the most appropriate test tool for interaction with the software or application under test (AUT).
b) Application independence
a. The framework codeset (FWCS) is structured to allow the incorporation of new AUT support with well contained and minimal code changes in specific libraries that are dedicated to AUT interaction.
b. Universal methods common to all AUTs that are deemed suitable candidates for automation are already incorporated into the FWCS AUT interaction layer.
c) Framework Code Set design principles
a. To ensure simplicity within the executing framework codeset, AUT logic capture within the code is minimized, thereby minimizing the codeset maintenance overhead. The framework code set does not perform any AUT anticipatory activities; the behavior is driven purely by the order in which the instructions are sequenced.
b. The structure of the automated test, where driven by / sourced from an original manual test script, mimics the structure and sequence of that original test. Enhancements provided by automation can be added to improve test coverage at the discretion of the test engineer.
c. Support for reaction logic - adaptation or selection of optional operations based on real-time application state is supported under the following conditions:
i. The necessary trigger for the optional behaviour is transparent and consistent.
ii. The AUT behavioural flow is unaffected after optional step processing is performed - beyond the boundaries of that step, the subsequent actions and endpoints remain constant.
iii. The AUT behaviour is consistent and reproducible (if this is not the case, it might be considered that the AUT is not a suitable candidate for automation).
d) Re-usability of test steps (or operational datasets) and test scenarios
a. An automated test scenario is a user-defined, logically sequenced and incrementally indexed collection of operations, assembled for the purpose of verifying AUT behaviors. The emphasis is on the engineer to correctly sequence the operations within the test scenario.
Test scenarios exist only in the framework database.
There are no restrictions on re-use of operations or test input data between scenarios.
Each Test scenario is given a scenario index -a numerical value to uniquely identify that scenario in the automated test suite.
b. An operation is a user-defined, distinct and re-usable block of test actions to be performed. The scope of captured behaviors within the operation is unrestricted by code and defined by the creator, but should, for ease of coverage measurement and traceability, mimic the original manual test step as closely as is practical.
c. Test steps are created by test engineers to perform the following actions (this list is not exhaustive):
i. Test preparatory activities - for example: data manipulation and/or insertion, application launching, application of settings within the AUT etc., where such behaviors cannot be accommodated in the universal navigation action classifications (see the 'Universal navigation action classifications' section).
ii. Navigation - for functional tests against a GUI-based AUT, navigation actions emulate the inputs a human tester would perform to generate the test conditions. These actions include data input (AUT table input) and navigational actions in line with the universal navigation action classifications.
iii. Verification points - a defined point within the scenario flow where the application state should be captured and compared against an established baseline or dynamically stored values held in memory. Examples of verification points (this list is not exhaustive): screen capture (whole or partial); displayed content (from a GUI object); object state capture (if the chosen automation tool supports such methods); SQL record set capture (and output file generation if required); any other bespoke verification method. If the developed steps incorporate specific synchronization in their navigation instructions, the captured AUT behavior also fulfills the requirements of flow verification.
e) Execution logic
Scenario Execution
a. The scenario execution function within the FWCS is provided with the scenario index, which is used to generate a collection of steps to execute for that scenario.
The scenario execution function obtains a group of instructions to process from the scenario control table (FWDB).
The scenario control table contains the following key information items (columns):
Scenario index: the identifier for the scenario (numeric).
Operation sequence index: the sequence index for the test step (numeric).
Operation: identifies the source for operational control data (operational control table), via lookup against a cross-referencing table (where multiple AUTs are supported).
Operational dataset: the specific identifier for the instruction set to process (the operation / action to perform).
Include flag: indicates whether the step is included in execution (if null, the step is not included).
Debug pause: for developmental purposes, if a non-null value is set, execution will pause. This is used primarily for debugging automated scenarios.
Comments: user comments, optional.
The function iterates through the test scenario instructions (operations, operational datasets) sequentially.
An operation execution function exists in the FWCS, called from the scenario execution function, that receives the operational dataset and processes the instructions contained in the operational control table (FWDB), which include the navigation actions to perform (reference navigation operation) and the AUT input data to process if applicable (the data identifier).
The operation execution function reports the outcome of the attempt.
If successful, the operation execution function returns a positive code and the scenario execution function proceeds to the next step.
If unsuccessful, the operation execution function returns an error value and, subject to a global parameter value for repeat attempts being greater than zero, will return to the first step and start again.
If subsequent attempts fail (up to the maximum allowed number of repeat attempts) the scenario is reported as 'terminated' before being abandoned.
A controlled shutdown procedure step may be declared for steps beyond a set point within a test scenario to ensure that any environment specific rules are adhered to -for example, controlled logging out to preserve availability of a specific login, etc. The first steps of a test scenario will provide the necessary pre-conditions for the test, ensuring that the AUT is in the correct state to receive a subsequent attempt (if configured).
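The repeat-attempt and termination behaviour described above might be sketched as follows; this is illustrative only, and the function name, parameters and return values are assumptions rather than the patent's definitions:

    def execute_scenario(steps, run_operation, repeat_attempts=1):
        """Run sequenced steps; on failure, restart the scenario up to the allowed attempts."""
        failure_point = None
        for attempt in range(1 + repeat_attempts):
            completed = True
            for step in steps:                   # steps come from the scenario control table
                if not run_operation(step):      # the operation execution function reports the outcome
                    failure_point = step
                    completed = False
                    break                        # return to the first step and start again
            if completed:
                # A clean first attempt is a PASS; a restarted run that completes is a FAIL.
                return ("PASS", None) if failure_point is None else ("FAIL", failure_point)
        return ("SCENARIO TERMINATED", failure_point)  # abandoned after the final repeat attempt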
Operation execution
The operation execution function within the FWCS is provided with the operation type and operational dataset, which are used to obtain the instructions to execute for that particular test step.
Operation Type: Common (non bespoke): Common operations account for the majority of test steps, having behavior that fits neatly into the universal navigation action model.
The operation control table (FWDB) contains the following key information items (columns):
Operational dataset: the identifier for the operation.
Reference navigation operation: the navigation instructions for the step.
Data identifier: the index for AUT input data to collect (see the AUT data modeling section).
Comments: user comments.
Input skip flag: used for pre-requisite steps - if not null, the framework code set will process the data collection without performing the physical AUT action. This feature is mainly used to declare and store key query parameters for subsequent data mining operations.
Drop flag: if not null, the step is ignored.
The function iterates through the test scenario instructions (operations, operational datasets) sequentially.
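A small sketch of how the drop and input skip flags might steer processing of a single common operation record follows; the column names mirror the table above, while the function names and record shape are assumptions:

    def process_operation_record(record, collect_input_data, perform_aut_action):
        """Apply the drop and input skip flags before any physical AUT interaction."""
        if record.get("drop_flag") is not None:
            return None                          # the step is ignored entirely
        data = collect_input_data(record["data_identifier"])
        if record.get("input_skip_flag") is not None:
            return data                          # store query parameters only; no physical AUT action
        return perform_aut_action(record["reference_navigation_operation"], data)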
f) Bulk test execution and workload distribution
All scenarios to be executed (marked as active) are indexed in a separate table within the FWDB. This table also provides an allocation mechanism (by declaration of a device or devices nominated to run a specific scenario) during parallel execution.
Each executing device has a 'machine name' allocated which is then used to gather a list of tests to run when that machine commences execution.
During execution, each 'machine' updates a table within the FWDB (sequence store) that holds information on the index of the test being performed. If a test run is halted unexpectedly or deliberately, upon resumption of the run each machine will interrogate the sequence storage table and resume execution, re-running the incomplete scenario and continuing execution from that point onwards.
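A hedged sketch of the sequence store behaviour described above, with assumed table and column names:

    def record_progress(fwdb, machine_name, scenario_index):
        """Note the scenario currently being executed so a halted run can resume later."""
        fwdb.execute("DELETE FROM sequence_store WHERE machine_name = ?", (machine_name,))
        fwdb.execute("INSERT INTO sequence_store VALUES (?, ?)", (machine_name, scenario_index))

    def resumption_point(fwdb, machine_name):
        """Return the scenario to re-run on resumption, or None to start from scratch."""
        row = fwdb.execute(
            "SELECT scenario_index FROM sequence_store WHERE machine_name = ?",
            (machine_name,)).fetchone()
        return row[0] if row else None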
g) AUT data modeling
AUT entities
a. The framework database component enforces and mimics the structure of the input mechanisms within the AUT, including (but not limited to):
i. graphical user interface (GUI) input tables (AUT tables)
ii. underlying AUT database tables (non-GUI)
h) Universal navigation action classifications
The framework code set (FWCS) AUT interaction layer is designed specifically to target the universal navigation action classifications; each classification has its own dedicated navigation function or sub-clause within the overall navigation processing function.
i) Self-contained GUI object identification data
Universal navigation actions are triggered by providing the extracted GUI description for the object (the target of the input). This information is stored within the framework database and can be substituted at runtime to accommodate multiple AUT versions.
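A final hedged sketch of the idea in h) and i): the interaction layer dispatches each navigation action classification to a dedicated handler, using GUI identification data supplied from the FWDB. The action names, handler functions and identification string format below are assumptions for illustration; a real interaction layer would drive the chosen test tool rather than printing.

    # Per-classification handlers; one dedicated function per universal navigation action type.
    def click(gui_id, value=None):
        print("click on", gui_id)

    def set_value(gui_id, value):
        print("type", value, "into", gui_id)

    def select_item(gui_id, value):
        print("select", value, "in", gui_id)

    NAVIGATION_HANDLERS = {"CLICK": click, "SET": set_value, "SELECT": select_item}

    def perform_navigation_action(action, gui_id, value=None):
        """Trigger one navigation action against the GUI object identified by FWDB data."""
        NAVIGATION_HANDLERS[action](gui_id, value)

    # The identification string comes from the navigation data table and can be
    # substituted at run time to target a different version of the same AUT.
    perform_navigation_action("SET", "window=Login;object=username", "jsmith")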

Claims (9)

CLAIMS:
  1. A test tool and application agnostic test automation framework design comprised of two components - the framework database and the framework code set - where the code set exists solely to collect, interpret and execute sequenced instructions held in the framework database, via code components dedicated exclusively to processing user actions against the application under test at the lowest level, completely devoid of any anticipatory logic.
  2. A test automation framework design according to claim 1, in which the automated test (user inputs and user data) exists solely as sequenced instructions within the framework database, entirely outside the framework code or chosen automation tool.
  3. A test automation framework design according to claim 1, in which the test scenarios are comprised of individual test steps (operational datasets), existing only within the framework database, which are infinitely re-usable and entirely unlimited in scope.
  4. A test automation framework design according to claim 1, in which developed test steps are used to trigger human input behaviors within the code set for the purpose of navigating through the software under test (performing the test actions), and can be executed independently of the test scenario for the purposes of test scenario development.
  5. A test automation framework design according to claim 1, in which specific navigational instructions - which form the actual execution instructions of a test step - are assembled by means of sequenced actions to be carried out against user interface objects, within the boundaries of the navigation action classifications (object interaction types and methods) relevant to the application receiving input, for either positive or negative testing methods.
  6. A test automation framework design according to claim 1, in which the test data to be input is organized into distinct form representation tables within the framework database, used to store input data and provide logical points of reference for data maintenance, organized by the data identifier.
  7. A test automation framework design according to claim 1, in which any graphical user interface identification information (parent windows, child objects) necessary to facilitate application GUI object recognition / input within the chosen automation tool is stored entirely in the framework database, is used to trigger navigation actions and process input requests, and can be swapped / substituted for alternative values at run time where necessary.
  8. A test automation framework design according to claim 1, that allows the allocation of test scenarios across multiple execution devices (workstations, mobile devices) in a specified execution sequence, and targeted test selection by means of application and functional area classifications.
  9. A test automation framework design according to claim 1, where test outcome results are generated by the dedicated functions used to execute the test instructions, and are stored within the framework database, organized by the name of the test run as designated by the test engineer.
GB1500626.5A 2015-01-15 2015-01-15 The sequenced common action and flow representation automation model (SCAFRA) Withdrawn GB2537341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1500626.5A GB2537341A (en) 2015-01-15 2015-01-15 The sequenced common action and flow representation automation model (SCAFRA)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1500626.5A GB2537341A (en) 2015-01-15 2015-01-15 The sequenced common action and flow representation automation model (SCAFRA)

Publications (2)

Publication Number Publication Date
GB201500626D0 GB201500626D0 (en) 2015-03-04
GB2537341A true GB2537341A (en) 2016-10-19

Family

ID=52630604

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1500626.5A Withdrawn GB2537341A (en) 2015-01-15 2015-01-15 The sequenced common action and flow representation automation model (SCAFRA)

Country Status (1)

Country Link
GB (1) GB2537341A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945006A (en) * 2017-11-15 2018-04-20 深圳市买买提乐购金融服务有限公司 A kind of business management system and method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287097A (en) * 2019-05-20 2019-09-27 深圳壹账通智能科技有限公司 Batch testing method, device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6002871A (en) * 1997-10-27 1999-12-14 Unisys Corporation Multi-user application program testing tool
US6067639A (en) * 1995-11-09 2000-05-23 Microsoft Corporation Method for integrating automated software testing with software development
US20070220341A1 (en) * 2006-02-28 2007-09-20 International Business Machines Corporation Software testing automation framework
US20110258600A1 (en) * 2010-04-19 2011-10-20 Microsoft Corporation Using a dsl for calling apis to test software
US20140215439A1 (en) * 2013-01-25 2014-07-31 International Business Machines Corporation Tool-independent automated testing of software

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6067639A (en) * 1995-11-09 2000-05-23 Microsoft Corporation Method for integrating automated software testing with software development
US6002871A (en) * 1997-10-27 1999-12-14 Unisys Corporation Multi-user application program testing tool
US20070220341A1 (en) * 2006-02-28 2007-09-20 International Business Machines Corporation Software testing automation framework
US20110258600A1 (en) * 2010-04-19 2011-10-20 Microsoft Corporation Using a dsl for calling apis to test software
US20140215439A1 (en) * 2013-01-25 2014-07-31 International Business Machines Corporation Tool-independent automated testing of software

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945006A (en) * 2017-11-15 2018-04-20 深圳市买买提乐购金融服务有限公司 A kind of business management system and method

Also Published As

Publication number Publication date
GB201500626D0 (en) 2015-03-04

Similar Documents

Publication Publication Date Title
EP3246818B1 (en) Functional behaviour test system and method
US9098633B2 (en) Application testing
AU2017327823B2 (en) Test case generator built into data-integration workflow editor
US8694967B2 (en) User interface inventory
US9298596B2 (en) Test framework for computing jobs
CN108897571B (en) Program packaging deployment method, device, system, electronic equipment and storage medium
US11138097B2 (en) Automated web testing framework for generating and maintaining test scripts
KR20120121950A (en) Application Graphic User Interface Test Automation System and Method Thereof
US20150154097A1 (en) System and method for automated testing
Wahler et al. CAST: Automating software tests for embedded systems
US11169910B2 (en) Probabilistic software testing via dynamic graphs
US9983965B1 (en) Method and system for implementing virtual users for automated test and retest procedures
CN103455672B (en) A kind of FPGA emulation testing use-case automatization homing method
Matalonga et al. Matching context aware software testing design techniques to ISO/IEC/IEEE 29119
Ghosh et al. A systematic review on program debugging techniques
GB2537341A (en) The sequenced common action and flow representation automation model (SCAFRA)
CN113485928A (en) Automatic testing method and device for switch
CN104536880A (en) GUI program testing case augmentation method based on symbolic execution
TW202307670A (en) Device and method for automated generation of parameter testing requests
Wienke et al. Performance regression testing and run-time verification of components in robotics systems
Jain et al. Comparative study of software automation testing tools: OpenScript and selenium
Kim et al. A study on test case generation based on state diagram in modeling and simulation environment
Köckerbauer et al. Scalable parallel debugging with g-Eclipse
Nieminen et al. Adaptable design for root cause analysis of a model-based software testing process
CN117234946B (en) Automatic test method and related equipment for project library system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)