WO2017142393A1 - System for managing user experience test in controlled test environment and method thereof - Google Patents

System for managing user experience test in controlled test environment and method thereof

Info

Publication number
WO2017142393A1
Authority
WO
WIPO (PCT)
Prior art keywords
test
user experience
user
plan
system under
Application number
PCT/MY2016/050085
Other languages
French (fr)
Inventor
Fook Ann LOO
Shi Tzuaan SOO
Muhammad Dhiauddin MOHAMED SUFFIAN
Ashok SIVAJI
Original Assignee
Mimos Berhad
Application filed by Mimos Berhad
Publication of WO2017142393A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3438: monitoring of user actions
    • G06F 11/3409: for performance assessment
    • G06F 11/3414: Workload generation, e.g. scripts, playback
    • G06F 11/3466: Performance evaluation by tracing or monitoring
    • G06F 11/3476: Data logging
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/40: Data acquisition and logging

Definitions

  • According to one example, the test questions are scale questions where the users respond by selecting a score from a scale of 1 to 5, e.g. scale 1 is "very bad" and 5 is "very good", and the score threshold is 3.
  • The analyzer and issue management module 104 will next calculate an average score of all the scores from the participating users, for example five users, and compare the average score against the score threshold of 3. If the average score falls below the score threshold, then the analyzer and issue management module 104 will trigger the test controller 100 to execute the functional test, the performance test and the UET on that particular function of the SUT. Thereafter, the results from the tests will be gathered and further scrutinized for the root cause analysis. Finally, the analyzer and issue management module 104 will generate a problem report and forward the same to the developer.
  • The software release generally refers to the distribution of a new or an upgraded version of computer software such as the SUT. Committing such a release will trigger a signal transmission to the software release post-commit trigger module 105.
  • The signal is generated by the "post-commit hook" in the centralized repository.
  • The post-commit hook is run after the upgrade is committed and a new version is created.
  • The software release post-commit trigger module 105, which is in communication with the analyzer and issue management module 104, can be configured to monitor any issuance and distribution of a software release associated with the SUT.
  • The software release post-commit trigger module 105 will subsequently trigger download of the software release at the test controller 100. Further, it will also trigger execution of the same at the test controller 100, whereby the test controller 100 will re-execute the test plan or execute a new UET on the upgraded SUT. Once the re-execution or new execution of the UET is completed, the analyzer and issue management module 104 will perform its part by providing analysis, comparing the feedback score with the score threshold, and eventually issuing a problem report for the SUT. The iteration cycle of the above-mentioned steps will continue accordingly.
  • Initialization of the controlled test environment for the UET, including setup and configurations, is preferably taken care of by the test setup agent 106. It is essentially capable of setting up the operating environment in which the SUT should be tested during the UET, in accordance with the test plan, the user traffic flow and the emulated network connections thereof.
  • The setup and configuration of the operating environment, i.e. the controlled test environment, can include, but is not limited to, the type of hypervisor and virtual machines, size of random access memory (RAM), type and number of central processing unit (CPU) cores, size and type of hard disk, public cloud vendor, location of the public cloud virtual machine, type of operating system, type of server and type of application runtime.
  • The test setup agent 106 basically runs based on instructions from the test controller 100. For instance, upon an instruction from the test controller 100, the test setup agent 106 will install and set up a new software release of the SUT, released by the developer, which contains bug fixes.
  • the test report and test log module 107 can be configured to manage the reports and logs for the SUT generated during the UET.
  • the test logs and test report will be stored in the persistent storage thereof.
  • Figures 3 and 4 describe, in a step-by-step manner, the method according to one preferred embodiment of the present invention.
  • The method begins, once triggered by the test controller 100, with an initialization of the test environment, including setup and configuration of the SUT by the test setup agent 106, based on a test plan previously prepared at the test plan management module 108 by the tester or administrators.
  • The test plan associated with the UET for the SUT, which the test controller 100 receives from the test plan management module 108, will then be provided and executed (see step 200).
  • In step 201, the adaptive load generator agent 102 will establish a user traffic flow by simulating an anticipated system load of the SUT in accordance with the test plan thereof.
  • In step 202, the adaptive network emulator agent 103 will establish one or more emulated network connections by modifying at least three connection parameters, including bandwidth, latency and packet loss, of one or more of the plurality of network connections in accordance with the test plan thereof.
  • the analyzer and issue management module 104 will provide performance analysis to thereby compute a feedback score based on user feedback received thereof (see step 203). Thereafter, the feedback score will be compared against a score threshold to trigger execution of another UET, a functional test and a performance test on a selected part of the test plan for reporting the root cause analysis, as in step 204.
  • This step 204 is preferably executed by the analyzer and issue management module 104 (see "A").
  • the analyzer and issue management module 104 will check for any usability issue pertaining to the SUT. It can be done by way of comparing the feedback score with the score threshold. If there is no usability issue detected, then a UET or test report will be immediately generated. However, if there is a usability issue, the analyzer and issue management module 104 will trigger execution of another UET, a functional test and a performance test on a selected part of the test plan where the issues arise. The results obtained from the UET, the functional test and the performance test are consolidated and further analyzed for root cause analysis. The analyzer and issue management module 104 will immediately find the root cause of the usability or user experience issue(s) based on the results obtained from the three tests. Once the analyzer and issue management module 104 has found the root cause of the usability or user experience issue(s), then it will generate a problem report to the developer.
  • Step 205 indicates a step of monitoring a software release associated with the SUT, thereby triggering download and execution of the same; this monitoring shall be executed in a continuous manner.
  • the software release is preferably generated once the problem report has been created by the analyzer and issue management module 104.
  • A version upgrade is performed on the SUT based on the input from the developer who fixed the issues (see step 207).
  • a software release associated with the SUT will be generated in step 208.
  • The software release will next be made available to the software release post-commit trigger module 105 or other modules with similar functions so that its presence will be apparent and noticed.
  • The software release post-commit trigger module 105 will subsequently detect the software release. It will thus trigger download of the software release at the test controller 100 and further trigger execution of the same, whereby the test controller 100 will re-execute the test plan or execute a new UET on the upgraded SUT (a simplified walk-through of this iteration cycle is sketched after this list).
  • the terms “a” and “an,” as used herein, are defined as one or more than one.
  • the term “plurality,” as used herein, is defined as two or more than two.
  • the term “another,” as used herein, is defined as at least a second or more.
  • the terms “including” and/or “having,” as used herein, are defined as comprising (i.e., open language).
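For orientation, the iteration cycle described in the method steps above (steps 200 to 208) can be summarized in code form. Every name in the following Python sketch is a hypothetical stand-in; the patent defines the modules and steps, not this code.

```python
# Purely illustrative walk-through of steps 200-208 described above; every name
# is a hypothetical stand-in, not an interface defined by the patent.

def run_iteration(test_tasks, user_feedback, score_threshold, new_release_available):
    """One pass of the UET cycle: execute, analyze, report and re-test on release."""
    log = [
        f"step 200: test controller 100 provides and executes the test plan ({len(test_tasks)} tasks)",
        "step 201: adaptive load generator agent 102 establishes the user traffic flow",
        "step 202: adaptive network emulator agent 103 establishes the emulated network connections",
    ]
    score = sum(user_feedback) / len(user_feedback)          # step 203: feedback score
    log.append(f"step 203: feedback score = {score:.2f}")
    if score < score_threshold:                              # step 204: usability issue detected
        log.append("step 204: trigger another UET, a functional test and a performance test; "
                   "report the root cause and generate a problem report (steps 206-207)")
    if new_release_available:                                # steps 205 and 208
        log.append("steps 205/208: post-commit trigger module 105 detects the software release; "
                   "test controller 100 downloads it and re-executes the test plan")
    return log

for line in run_iteration(test_tasks=["registration", "product search"],
                          user_feedback=[2, 3, 2, 1, 3],
                          score_threshold=3,
                          new_release_available=True):
    print(line)
```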

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present invention relates to an improved system and method for managing a user experience test (UET) for a system under test (SUT) in a controlled test environment. The system comprises a test controller (100), a user interface agent (101), an adaptive load generator agent (102), an adaptive network emulator agent (103), an analyzer and issue management module (104), a software release post-commit trigger module (105), a test setup agent (106), a test report and test log module (107), a test plan management module (108) and a functional test execution agent (109). The method comprises providing a test plan for the SUT to users via network connections (200); establishing a user traffic flow by simulating system load (201); establishing one or more emulated network connections by modifying at least three connection parameters (202); providing performance analysis thereby computing a feedback score (203); comparing the feedback score against a score threshold to trigger execution of another UET, a functional test and a performance test for reporting root cause analysis (204); and monitoring a software release thereby triggering download and execution of the same (205).

Description

SYSTEM FOR MANAGING USER EXPERIENCE TEST IN CONTROLLED TEST ENVIRONMENT AND METHOD THEREOF
FIELD OF THE INVENTION
The present invention relates generally to arrangements for user experience testing. More particularly, the present invention relates to an improved system and method for managing a user experience test for a system under test in a controlled test environment.
BACKGROUND OF THE INVENTION
Usability testing, or user experience testing, is an important aspect of the user-centered design approach that has become a major focus of research activities. It is usually conducted once the product has reached certain design and development stages and the respondents have been duly recruited. The user experience test provides insights, understanding and recommendations about the user needs in relation to a product under testing, which will be useful for further improvements to the product later on. During the test, the respondents or groups of potential users are required to complete specified routine tasks while being monitored by a moderator or tester who will observe, listen and take notes. This can be done either in a usability lab, remotely or on-site with portable equipment. The user experience test has greatly benefited, in terms of product review and efficiency, many technological fields, both digital and physical, such as website development and smartphone applications or firmware.
U.S. Publication No. 2004/0015867 A1, for example, discloses an automated usability testing system and method. According to the '867 publication, the system comprises a test plan creator and a data logger for constructing a test plan and for collecting test data, respectively. The system also comprises a log analyzer for summarizing the data log in a report.
Although there are many systems and methods for user experience test in the prior art, unfortunately, some of the attempts have been found to possess disadvantages and to be unsatisfactory for use in the testing. One problem with the conventional system is that there is no integration of other tests that would be able to provide additional relevant inputs to improve the testing. The user experience test and various other tests are usually conducted separately, which do not depend on each other. In some instances, to be able to accurately correlate the results obtained from the user experience test and to reflect the actual operating environment as close as possible to the real environment, one has to import some relevant data derived from tests other than the user experience test as well as to manage or control the test environment of the user experience test efficiently. With this data and proper management in hand, the tester would have been in a better position to conclude the case.
Another problem is that the user experience test is typically conducted in a limited number of test environments due to the cost factor, which leads to a non-holistic review of a product. The more test environments there are, the more expensive the testing is. In most cases, the user experience test of a website, for instance, is conducted through one network connection only, such as a wired LAN connection, so the real environment that contains many different network connections will never be achieved. The different network connections in general have different characteristics.
Yet another problem is that the root causes of the product issues or failure remain unsolved and undetermined, although execution of the user experience test is deemed a success. It was found that the tester is unable to unlock solutions to the problems faced by the product during the testing, the reason being that the conventional system has no data adaptation that could help solve the mystery.
A need therefore exists for an improved system and method for managing the user experience test to overcome some of the problems and shortcomings or at least mitigate the disadvantages of the prior art.
SUMMARY OF THE INVENTION
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.
Accordingly, the present invention provides a system for managing a user experience test in a controlled test environment.
The system of the present invention can be characterized by a test controller configured for providing and executing a test plan associated with the user experience test for a system under test to a set of users via a plurality of network connections through a user interface agent; an adaptive load generator agent configured for establishing a user traffic flow by simulating an anticipated system load of the system under test in accordance with the test plan thereof, wherein the user traffic flow represents actions and types of virtual users; an adaptive network emulator agent configured for establishing one or more emulated network connections by modifying at least three connection parameters including bandwidth, latency and packet loss of one or more of the plurality of network connections in accordance with the test plan thereof; an analyzer and issue management module configured for providing performance analysis and computing a feedback score in relation to user feedback transmitted by the test controller, wherein the feedback score is compared against a score threshold to trigger execution of another user experience test, a functional test and a performance test on a selected part of the test plan for reporting root cause analysis; a software release post-commit trigger module in communication with the analyzer and issue management module configured for monitoring a software release associated with the system under test thereby triggering download and execution of the same at the test controller; and a test setup agent configured for initializing the controlled test environment and the system under test based on the test plan, the user traffic flow and the one or more emulated network connections thereof. Preferably, the system further comprises a test report and test log module configured for managing reports and logs generated during the user experience test; and a test plan management module configured for preparing the test plan of the user experience test including test tasks, test questions, test configurations, test environment settings, user guides, performance test scripts and functional test scripts. Preferably, the one or more of the plurality of network connections is selected based on Internet protocol (IP) address, network ports, media access control (MAC) address of source or destination, and network protocol. Preferably, the functional test relates to assessment of feature functions of the system under test.
Preferably, the performance test relates to assessment of performance of the system under test in the user traffic flow and the one or more emulated network connections.
Preferably, the root cause analysis is a comparison of results obtained from the user experience test, the functional test and the performance test. Preferably, the software release is an upgrade version of the system under test developed based on the root cause analysis.
Preferably, the analyzer and issue management module generates a problem report comprising the user experience test, the functional test, the performance test and results thereof, and the root cause analysis to a developer of the system under test for upgrading the system under test prior to generation of the software release.
In accordance with another aspect of the present invention, there is provided a method of managing a user experience test in a controlled test environment.
The method of the present invention can be characterized by providing and executing a test plan associated with the user experience test for a system under test to a set of users via a plurality of network connections; establishing a user traffic flow by simulating an anticipated system load of the system under test in accordance with the test plan; establishing one or more emulated network connections by modifying at least three connection parameters including bandwidth, latency and packet loss of one or more of the plurality of network connections in accordance with the test plan; providing performance analysis thereby computing a feedback score based on user feedback received thereof; comparing the feedback score against a score threshold to trigger execution of another user experience test, a functional test and a performance test on a selected part of the test plan for reporting root cause analysis; and monitoring a software release associated with the system under test thereby triggering download and execution of the same. Preferably, the method further comprises obtaining a problem report comprising the user experience test, the functional test, the performance test and results thereof, and the root cause analysis; performing a version upgrade, based on the problem report, to the system under test; and generating a software release associated with the system under test.
It is therefore an advantage of the present invention that it provides integration and connectivity with other suitable tests, such as functional and performance tests, which are closely related to the user experience test, so that the tester is able to draw a conclusion pertaining to the system under test accurately and quickly. Furthermore, the present invention advantageously provides root cause analysis based on a series of tests performed thereon, to allow immediate identification and resolution of the usability, functional and performance issues raised against a system under test. It is another advantage of the present invention that it closely resembles the real environment in which the product will be used in terms of user traffic flow and network connections. The present invention allows test environments for user experience testing to be created indefinitely. This can be achieved by manipulating the user traffic flow, in respect of the actions and types of virtual users, and the network connection, in respect of its connection parameters.
The foregoing and other objects, features, aspects and advantages of the present invention will become better understood from a careful reading of a detailed description provided herein below with appropriate reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the invention and many of the attendant advantages thereof will be readily appreciated as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Figure 1 is a schematic block diagram showing a system for managing a user experience test according to one embodiment of the present invention;
Figure 2 is a schematic block diagram showing a test controller and other modules associated with the system for managing a user experience test according to one embodiment of the present invention;
Figure 3 is a flow diagram depicting a method of managing a user experience test according to one embodiment of the present invention; and
Figure 4 is a flow diagram depicting the step of comparing a feedback score against a score threshold of the method of managing a user experience test according to one embodiment of the present invention.
It is noted that the drawings may not be to scale. The drawings are intended to depict only typical aspects of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numberings represent like elements between the drawings.
DETAILED DESCRIPTION OF THE INVENTION
It is an object of the present invention to provide an improved system and method for managing an unmoderated remote user experience test (UET) within a controlled test environment that provides integration and connectivity with functional and performance tests. It is further an object of the present invention to closely resemble the real test environment for the UET in terms of user traffic flow and network connections. It is also an object of the present invention to allow indefinite creation of the test environments of the UET. The present invention is developed in a highly compact, cost-effective, and simple manner, without the use of complicated and sophisticated components. According to one preferred embodiment of the present invention, as depicted in Figures 1 and 2, the system comprises a plurality of modules and agents or engines suitably adopted and interconnected with each other to deliver the desired effects and objectives thereof. The system comprises a test controller 100, a user interface agent 101, an adaptive load generator agent 102, an adaptive network emulator agent 103, an analyzer and issue management module 104, a software release post-commit trigger module 105, a test setup agent 106, a test report and test log module 107, a test plan management module 108 and a functional test execution agent 109.
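The wiring among these components can be pictured with a small sketch. The following Python snippet is illustrative only and not part of the patent; the dictionary layout and identifier names are assumptions that merely restate which agents and modules connect to the test controller 100.

```python
# Hypothetical sketch of the module wiring shown in Figures 1 and 2. The reference
# numerals follow the description; the dictionary layout itself is an assumption.

SYSTEM_WIRING = {
    "test_controller_100": {
        # First part: agents that indirectly connect the controller to the users and the SUT (Figure 1).
        "agents": [
            "user_interface_agent_101",
            "adaptive_load_generator_agent_102",
            "adaptive_network_emulator_agent_103",
            "test_setup_agent_106",
            "functional_test_execution_agent_109",
        ],
        # Second part: back-end modules, plus links to the developer and storage (Figure 2).
        "modules": [
            "analyzer_and_issue_management_module_104",
            "software_release_post_commit_trigger_module_105",
            "test_report_and_test_log_module_107",
            "test_plan_management_module_108",
        ],
    },
}

for controller, links in SYSTEM_WIRING.items():
    print(controller, "->", len(links["agents"]), "agents,", len(links["modules"]), "modules")
```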
The test controller 100 is the main test execution module of the system of the present invention. It is connected to every other module and agent employed by the system. The test controller 100, on a first part, is preferably connected to the user interface agent 101, the adaptive load generator agent 102, the adaptive network emulator agent 103, the test setup agent 106 and the functional test execution agent 109, which further allows indirect connection of the test controller 100 to a set of users and the system under test (SUT), as assembled in Figure 1. On a second part, the test controller 100 is connected to the analyzer and issue management module 104, the software release post-commit trigger module 105, the test report and test log module 107 and the test plan management module 108, besides the user interface agent 101. The arrangement of the modules and agents is best depicted by Figure 2. This second part also permits connections of the test controller 100 with a developer of the SUT and with a storage or database. The user experience test is essentially administered to the users or respondents by a tester for a particular product or SUT according to a test plan. The test plan is preferably provided and executed by the test controller 100. Particularly, the test plan is prepared at the test plan management module 108 by the tester or administrators and passed on to the test controller 100 for further action to the users. In this regard, the test plan comprises test tasks, test questions, test configurations, test environment settings, user guides, performance test scripts and functional test scripts. The test questions may be provided in the form of scale questions, multiple-choice questions and open questions. The test controller 100 may execute the test plan associated with the UET, loaded from the test plan management module 108, via a plurality of network connections through the user interface agent 101.
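As an illustration of what such a test plan might hold, the sketch below encodes the enumerated contents as a Python data structure. All field names, types and sample values are assumptions made for this example; the patent does not define a concrete schema.

```python
# Hypothetical data structure for a test plan as enumerated above. Field names,
# types and sample values are assumptions made for illustration only.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TestQuestion:
    text: str
    kind: str                       # "scale", "multiple-choice" or "open", per the description
    scale: range = range(1, 6)      # used by scale questions, e.g. 1 ("very bad") to 5 ("very good")

@dataclass
class TestPlan:
    test_tasks: List[str]
    test_questions: List[TestQuestion]
    test_configurations: Dict[str, str]
    test_environment_settings: Dict[str, str]   # consumed by the test setup agent 106
    user_guides: List[str]
    performance_test_scripts: List[str]
    functional_test_scripts: List[str]

plan = TestPlan(
    test_tasks=["Register a new account", "Search for a product"],
    test_questions=[TestQuestion("How easy was registration?", "scale")],
    test_configurations={"score_threshold": "3"},
    test_environment_settings={"hypervisor": "KVM", "ram_gb": "8",
                               "cpu_cores": "4", "operating_system": "Linux"},
    user_guides=["user_guide.pdf"],
    performance_test_scripts=["perf_registration_script"],
    functional_test_scripts=["func_registration_script"],
)
print(len(plan.test_tasks), "test tasks,", len(plan.test_questions), "test questions")
```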
Besides that, the test controller 100, as its name implies, is further configured to monitor and control all the modules and agents in the system of the present invention. For instance, the test controller 100 monitors and controls the test setup agent 106 to set up the SUT, the analyzer and issue management module 104 to analyze user feedback and provide performance analysis, the adaptive network emulator agent 103 to start network emulation, the adaptive load generator agent 102 to generate the desired user traffic flows for the testing, and the user interface agent 101 to give the test tasks to the users in question. The test controller 100 receives user feedback from the users through the user interface agent 101. In the context of the present invention, the test tasks and the user feedback may be in the form of graphical user interface (GUI) inputs, text, photos, videos and more. All the user feedback will be stored and saved in the storage provided thereof. The user interface agent 101 may comprise a user interface that will be used as the communication means between the system, particularly the test controller 100, and the users. The adaptive load generator agent 102 can be adopted to create and establish a user traffic flow for the UET. In this respect, the adaptive load generator agent 102 will simulate an anticipated system load of the SUT, as specified by the test plan thereof. Preferably, the user traffic flow represents actions and types of virtual users that shall be applied as a system load during the UET to resemble the real environment of the SUT. The user traffic flow generated thereby will simulate the virtual users performing different actions on the SUT; the reason being to simulate actual usage of the SUT in the real environment.
There are different user types and user actions in the system of the present invention. Examples of user types can include, but are not limited to, project managers, developers, test engineers, process engineers, the manufacturing team and the marketing team, who exist virtually in the system. Examples of user actions can include, but are not limited to, project managers generating reports from the SUT, developers updating development progress in the SUT, and test engineers uploading test results to the SUT. In the test plan, the tester or administrators can specify the preferred system load to be generated and used on the SUT during the testing. One of the purposes is to allow the system of the present invention to simulate the anticipated system load in the real environment. For example, the tester can simulate five project managers who are generating reports from the SUT, 100 developers who are updating development progress in the SUT, and 50 test engineers who are uploading test results to the SUT. The tester will then upload the associated automation test scripts of user actions and user types to the test plan management module 108 to be compiled with the rest of the information in the test plan. Based on the test plan, the adaptive load generator agent 102, under the control of the test controller 100, simulates the user traffic flow accordingly. At this instance, when the users enter the SUT, the system load of the SUT will be the same as in the real environment and will project the actual user experience as in the real environment. Thus, the test results would be more accurate as the response time of the SUT is similar to that in the real environment.
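The worked example above (five project managers, 100 developers and 50 test engineers) could be encoded and expanded along the following lines. This is a minimal sketch; the data layout and the simulate_traffic helper are assumptions, since the actual agent 102 drives uploaded automation test scripts rather than plain dictionaries.

```python
# Illustrative encoding of the anticipated system load example given above.
# The structure and the simulate_traffic() helper are assumptions, not part of the patent.

ANTICIPATED_LOAD = [
    # (user type, user action, number of virtual users)
    ("project manager", "generate reports from the SUT", 5),
    ("developer", "update development progress in the SUT", 100),
    ("test engineer", "upload test results to the SUT", 50),
]

def simulate_traffic(load_spec):
    """Expand the load specification into one entry per virtual user.

    A real adaptive load generator agent 102 would instead launch the
    automation test scripts uploaded with the test plan."""
    traffic = []
    for user_type, action, count in load_spec:
        traffic.extend({"type": user_type, "action": action} for _ in range(count))
    return traffic

flow = simulate_traffic(ANTICIPATED_LOAD)
print(len(flow), "virtual users in the simulated user traffic flow")  # 155
```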
The adaptive load generator agent 102 may be configured to simulate many other anticipated system loads, for example during special conditions such as the year-end shopping season, school holiday season, stock clearance sales, etc. The tester can also execute or run the UET with a series of different system loads to check the changes in the user experience scoring when the system load and response time change. The result from this exercise can be used as a reference for the administrator to upgrade the SUT when, for example, the response time drops below a certain threshold, which reflects an unacceptably bad user experience.
The adaptive load generator agent 102 is an adaptive module which is capable of automatically changing the number of simulated users based on the number of real users participating in the UET. For example, say the administrator would like to conduct a UET with five users doing registration and 20 users searching for products on a website. If there is one real user doing the registration process and five real users searching for products on the website, the adaptive load generator agent 102 will adaptively adjust the system to simulate four virtual users doing registration and 15 virtual users searching for products. Alternatively, the adaptive load generator agent 102 can be used to simulate a fixed number of user traffic types regardless of the number of real users participating in the UET, which is implementable in the test plan. The adaptive load generator agent 102 is also configured to run the performance test on the SUT, the results of which will be used by the analyzer and issue management module 104 to find the root cause of the user experience related issues.
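The adaptive adjustment in this example reduces each planned activity by the number of real participants. Below is a minimal sketch of that rule; the clamping to zero is an added assumption for the case where real users outnumber the planned load.

```python
# A minimal sketch of the adaptive adjustment described above: the number of
# virtual users per activity is reduced by the number of real participants.

def adjust_virtual_users(planned: dict, real: dict) -> dict:
    """Return the number of virtual users to simulate per activity."""
    return {activity: max(0, planned[activity] - real.get(activity, 0))
            for activity in planned}

planned_load = {"registration": 5, "product_search": 20}   # specified in the test plan
real_users   = {"registration": 1, "product_search": 5}    # actual UET participants

print(adjust_virtual_users(planned_load, real_users))
# {'registration': 4, 'product_search': 15}, matching the example in the text
```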
The adaptive network emulator agent 103 can be configured to create and establish one or more emulated network connections for the UET. It is a network emulator module which is capable of changing the network connection parameters. The emulated network connections can be achieved by way of modifying at least three connection parameters of one or more selected network connections of the plurality of network connections in line with the test plan thereof. Preferably, the at least three connection parameters include bandwidth, latency and packet loss. Nevertheless, other connection parameters may also be used, such that the modifications and variations do not depart from the present invention. The tester or administrator would be able to specify the preferred types of network connection to be emulated for the UET. Examples of network connections as well as emulated network connections can include, but are not limited to, a cellular 3G network connection, a cellular long term evolution (LTE) or 4G network connection and wireless local area network connections of the 802.11 family.
The adaptive network emulator agent 103 will act as a layer-2 network switch that is totally transparent to the network traffic. When network traffic of the network connections passes through the adaptive network emulator agent 103, the adaptive network emulator agent 103 will change the connection parameters of selected network connections as specified in the test plan. It is imperative to note that not all network connections that pass through the adaptive network emulator agent 103 will have their connection parameters changed. The adaptive network emulator agent 103 will only perform network emulation on the selected network connections as instructed by the test controller 100 through its test plan. That is to say, other non-selected network connections will not be affected and will pass through the adaptive network emulator agent 103 without any changes to their connection parameters. The network connections will be selected based on certain requirements or criteria. Examples of requirements or criteria can include the source or destination Internet protocol (IP) address, source or destination network ports, source or destination media access control (MAC) address and the network protocol used thereof. At this instance, when the users enter the SUT, the SUT will behave the same as in the real environment as it projects the actual user experience in terms of network connection as in the real environment. According to one example, in the original network connection, a real user is connected to the SUT at 100 Mbps bandwidth, 0.01 ms latency and 0.001% packet loss. Based on an instruction embedded in the test plan as received from the test controller 100, when the network traffic passes through the adaptive network emulator agent 103, the adaptive network emulator agent 103 will change the original network connection to 1.5 Mbps, 120 ms and 1.5% packet loss to emulate a cellular 3G data connection, i.e. an emulated network connection. In this example, during the UET, the real users will be connected to the SUT via the emulated network connections. The response time of the application will be affected because in the emulated network connection the bandwidth is smaller, the latency is higher and the packet loss is higher as opposed to the original network connection.
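The patent does not prescribe how the connection parameters are modified on the transparent layer-2 path. One common realization on a Linux-based bridge is the tc traffic-control tool, using an HTB class for bandwidth and a netem qdisc for latency and packet loss, with a u32 filter selecting the flow by destination IP address. The sketch below only assembles such commands for the 3G example above; the device name and IP address are placeholders.

```python
# Builds tc commands emulating a 3G-like profile (1.5 Mbps, 120 ms, 1.5% loss)
# for one selected destination IP; unmatched traffic is left unshaped.
# This is one possible realization, not the patent's prescribed mechanism.

def emulation_commands(dev: str, dst_ip: str, rate_mbit: float,
                       delay_ms: int, loss_pct: float) -> list:
    """Assemble tc commands emulating a network profile for one selected flow."""
    return [
        f"tc qdisc add dev {dev} root handle 1: htb default 10",
        f"tc class add dev {dev} parent 1: classid 1:20 htb rate {rate_mbit}mbit",
        f"tc qdisc add dev {dev} parent 1:20 handle 20: netem delay {delay_ms}ms loss {loss_pct}%",
        # Only traffic towards the selected destination is emulated; other flows pass unchanged.
        f"tc filter add dev {dev} protocol ip parent 1: prio 1 u32 match ip dst {dst_ip} flowid 1:20",
    ]

for cmd in emulation_commands("eth0", "192.0.2.10", 1.5, 120, 1.5):
    print(cmd)
```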
The analyzer and issue management module 104 can be configured to provide performance analysis and to compute a feedback score in relation to the user feedback transmitted from the test controller 100. The user feedback received from the users is preferably analyzed according to the test plan. The user feedback, which is in response to the test questions, is gathered and computed based on the types of test questions the test controller 100 has put to the users. The scores given by the users for each of the test questions are summed to generate a feedback score. The feedback score is then compared against a score threshold set by the tester or administrator. The feedback score may be normalized to a common scale, such as an average value, which may also be compared against a corresponding score threshold. Also, an individual score of each of the test questions may be compared to an individual score threshold at this stage. Based on the comparison between the feedback score and the score threshold, the analyzer and issue management module 104 will trigger execution of another UET, a functional test and a performance test on a selected part of the test plan where the issues arise. The functional test preferably relates to assessment of feature functions of the SUT, while the performance test relates to assessment of performance of the SUT under the user traffic flow and the emulated network connections. The functional automation test script and the performance automation test script are uploaded by the tester in the test plan at the system initialization stage. The functional test is preferably run by the functional test execution agent 109. The analyzer and issue management module 104 decides whether or not to execute the other stipulated tests based on the comparison result. The results obtained from the UET, the functional test and the performance test are consolidated and further analyzed for root cause analysis.
The analyzer and issue management module 104 will immediately find the root cause of the usability or user experience issue(s) based on the results obtained from the three tests. The root cause analysis is a comparison of results obtained from the UET, the functional test and the performance test. Examples of analysis done by the analyzer and issue management module 104 can include, but are not limited to:
i. If the users are not satisfied with a feature function of the SUT, but the functional test is passed, then it can be concluded that the issue is not caused by the feature functions;
ii. If the users are satisfied with the feature function of the SUT, but not satisfied with the feature function when the SUT is executed at a high system load of user traffic flow, then it can be concluded that the system performance is poor at that user traffic flow and the issue is caused by its performance;
iii. If the users are not satisfied with the feature function regardless of whether the system load is high or low, then it can be concluded that the issue is not caused by its performance, but is instead caused by the system design such as workflow, layout, colors, etc. The tester can obtain more details about such a system design issue from the users by using different types of test questions such as open-ended questions.
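The three example rules above could be expressed as a small decision function, sketched below; the boolean inputs are assumptions about how the feedback scores and automated test results might be summarised.

    def classify_root_cause(satisfied_low_load, satisfied_high_load,
                            functional_test_passed):
        """Apply the example rules i-iii above to summarised test outcomes."""
        conclusions = []
        if not satisfied_low_load and functional_test_passed:
            # Rule i: the functional test passes, so the feature functions
            # themselves are not the cause of the complaint.
            conclusions.append("not caused by feature functions")
        if satisfied_low_load and not satisfied_high_load:
            # Rule ii: satisfaction drops only under high user traffic flow,
            # so the cause is poor performance at that load.
            conclusions.append("caused by performance at high load")
        if not satisfied_low_load and not satisfied_high_load:
            # Rule iii: dissatisfaction is independent of load, so the cause
            # is the system design (workflow, layout, colors, etc.).
            conclusions.append("caused by system design")
        return conclusions or ["no user experience issue detected"]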
Once the analyzer and issue management module 104 has found the root cause of the usability or user experience issue(s), it will generate a problem report to the developer. The problem report preferably comprises the UET, the functional test, the performance test and the results thereof, and the user feedback, along with the root cause analysis for the SUT. The report can be generated by bug tracking software such as JIRA, Bugzilla, etc. The developer, upon receiving the problem report, will then decide on upgrading or modifying the SUT in order to resolve the issues. The root cause analysis facilitates the developer in identifying or locating the parts of the SUT that need to be fixed.
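As a minimal sketch only, a problem report could be filed programmatically; the example below assumes a JIRA instance and uses its standard REST issue-creation endpoint, while the project key, credentials and field contents are placeholders. Bugzilla and other trackers expose their own APIs.

    import requests  # third-party HTTP client

    def file_problem_report(base_url, auth, summary, root_cause, results):
        """Create a bug in a JIRA-style tracker and return its issue key."""
        issue = {
            "fields": {
                "project": {"key": "UET"},       # placeholder project key
                "issuetype": {"name": "Bug"},
                "summary": summary,
                "description": ("Root cause analysis: %s\n\n"
                                "UET / functional / performance results:\n%s"
                                % (root_cause, results)),
            }
        }
        resp = requests.post(base_url + "/rest/api/2/issue",
                             json=issue, auth=auth, timeout=30)
        resp.raise_for_status()
        return resp.json()["key"]  # e.g. "UET-42"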
For example, say the test questions are scale questions where the users respond by selecting a score from a scale of 1 to 5, e.g. scale 1 is "very bad" and 5 is "very good", and the score threshold is 3. For the purpose of illustration, consider that five users participated in the UET and hence there are five individual scores as the user feedback. The analyzer and issue management module 104 will next calculate an average score of all the scores from the five users and compare the average score against the score threshold of 3. If the average score falls below the score threshold, the analyzer and issue management module 104 will trigger the test controller 100 to execute the functional test, the performance test and the UET on that particular function of the SUT. Thereafter, the results from the tests will be gathered and further scrutinized for the root cause analysis. Finally, the analyzer and issue management module 104 will generate a problem report and forward the same to the developer.
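A minimal sketch of the averaging and threshold comparison in this example, using assumed scores for the five participants:

    SCORE_THRESHOLD = 3  # scale of 1 ("very bad") to 5 ("very good")

    def needs_follow_up_tests(scores, threshold=SCORE_THRESHOLD):
        """Average the per-user scores for one test question and decide whether
        the functional test, performance test and repeat UET should be run."""
        average = sum(scores) / len(scores)
        return average < threshold, average

    # Five hypothetical participants scored the registration task.
    trigger, avg = needs_follow_up_tests([2, 3, 2, 1, 3])
    print(avg, trigger)  # 2.2 True -> trigger the follow-up tests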
Once the usability or user experience issues are resolved by the developer, a new software release which provides an upgraded or enhanced version of the SUT will be uploaded into a centralized repository. The software release generally refers to the distribution of a new or an upgraded version of computer software such as the SUT. Such an action will therefore trigger a signal transmission to the software release post-commit trigger module 105. Preferably, the signal is generated by the "post-commit hook" in the centralized repository. The post-commit hook is run after the upgrade is committed and a new version is created. The software release post-commit trigger module 105, which is in communication with the analyzer and issue management module 104, can be configured to monitor any issuance and distribution of a software release associated with the SUT. The software release post-commit trigger module 105 will subsequently trigger download of the software release at the test controller 100. Further, it will also trigger execution of the same at the test controller 100, whereby the test controller 100 will re-execute the test plan or execute a new UET on the upgraded SUT. Once the re-execution or new execution of the UET is completed, the analyzer and issue management module 104 will perform its part by providing analysis, comparing the feedback score with the score threshold, and eventually issuing a problem report for the SUT. The iteration cycle of the above-mentioned steps will continue accordingly.
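For illustration, a post-commit hook of a Subversion-style centralized repository receives the repository path and the new revision number as arguments; the sketch below simply forwards them to the software release post-commit trigger module over HTTP. The notification URL is a placeholder assumption.

    #!/usr/bin/env python3
    # Sketch of hooks/post-commit: the repository calls this script after a
    # new version of the SUT has been committed.
    import sys
    import urllib.parse
    import urllib.request

    def notify_release(repo_path, revision,
                       trigger_url="http://trigger-module.local/release"):
        """Inform the post-commit trigger module that a new release exists."""
        data = urllib.parse.urlencode({"repo": repo_path,
                                       "rev": revision}).encode()
        urllib.request.urlopen(trigger_url, data=data, timeout=10)

    if __name__ == "__main__":
        notify_release(sys.argv[1], sys.argv[2])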
Initialization of the controlled test environment for the UET, including setup and configurations, is preferably taken care of by the test setup agent 106. It is essentially capable of setting up the operating environment in which the SUT should be tested during the UET, in accordance with the test plan, the user traffic flow and the emulated network connections thereof. The setup and configuration of the operating environment, i.e. the controlled test environment, can include, but is not limited to, the type of hypervisor and virtual machines, size of random access memory (RAM), type and number of central processing unit (CPU) cores, size and type of hard disk, public cloud vendor, location of the public cloud virtual machine, type of operating system, type of server and type of application runtime. The test setup agent 106 basically runs based on instructions from the test controller 100. For instance, upon an instruction from the test controller 100, the test setup agent 106 will install and set up a new software release of the SUT, released by the developer, which contains bug fixes.
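By way of a sketch only, the environment parameters listed above could be captured in a declarative structure that the test setup agent consumes; all values below are placeholders.

    TEST_ENVIRONMENT = {
        "hypervisor": "KVM",
        "virtual_machines": 2,
        "ram_gb": 8,
        "cpu": {"type": "x86_64", "cores": 4},
        "disk": {"type": "SSD", "size_gb": 100},
        "public_cloud": {"vendor": "example-cloud", "region": "ap-southeast-1"},
        "operating_system": "Ubuntu Server",
        "server": "nginx",
        "application_runtime": "OpenJDK 11",
    }

    def apply_environment(config):
        """A real agent would provision the machines and install the SUT
        release named in the test plan; this stub only echoes the settings."""
        for key, value in config.items():
            print(key, "->", value)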
The test report and test log module 107 can be configured to manage the reports and logs for the SUT generated during the UET. The test logs and test report will be stored in the persistent storage thereof.
The other aspect of the present invention which relates to a method of managing the UET has been demonstrated in the preceding paragraphs. Figures 3 and 4 describe, in a step-by-step manner, the method according to one preferred embodiment of the present invention.
With reference to Figure 3, the method begins, upon being triggered by the test controller 100, with an initialization of the test environment, including setup and configuration of the SUT by the test setup agent 106, based on a test plan previously prepared at the test plan management module 108 by the tester or administrator. The test plan associated with the UET for the SUT, as provided by the test controller 100 (which receives the same from the test plan management module 108), will be provided and executed (see step 200). In step 201, the adaptive load generator agent 102 will establish a user traffic flow by simulating an anticipated system load of the SUT in accordance with the test plan thereof. The adaptive network emulator agent 103, in step 202, will establish one or more emulated network connections by modifying at least three connection parameters, including bandwidth, latency and packet loss, of one or more of the plurality of network connections in accordance with the test plan thereof. Following that, the analyzer and issue management module 104 will provide performance analysis to thereby compute a feedback score based on the user feedback received (see step 203). Thereafter, the feedback score will be compared against a score threshold to trigger execution of another UET, a functional test and a performance test on a selected part of the test plan for reporting the root cause analysis, as in step 204. This step 204 is preferably executed by the analyzer and issue management module 104 (see "A").
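As an illustrative sketch of steps 200 to 205, the sequence could be orchestrated roughly as below; the attribute and method names on the agents are assumptions and not part of the method itself.

    def run_user_experience_test(test_plan, agents):
        """Drive one UET iteration through the modules described above."""
        agents.test_setup.initialize(test_plan)                   # environment setup
        agents.test_controller.execute(test_plan)                 # step 200
        agents.load_generator.establish_traffic(test_plan)        # step 201
        agents.network_emulator.establish_connections(test_plan)  # step 202
        feedback = agents.test_controller.collect_feedback()
        score = agents.analyzer.compute_feedback_score(feedback)  # step 203
        agents.analyzer.check_threshold_and_trigger(score, test_plan)  # step 204
        agents.release_trigger.monitor_releases()                 # step 205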
At step 204, with reference to Figure 4, the analyzer and issue management module 104 will check for any usability issue pertaining to the SUT. It can be done by way of comparing the feedback score with the score threshold. If there is no usability issue detected, then a UET or test report will be immediately generated. However, if there is a usability issue, the analyzer and issue management module 104 will trigger execution of another UET, a functional test and a performance test on a selected part of the test plan where the issues arise. The results obtained from the UET, the functional test and the performance test are consolidated and further analyzed for root cause analysis. The analyzer and issue management module 104 will immediately find the root cause of the usability or user experience issue(s) based on the results obtained from the three tests. Once the analyzer and issue management module 104 has found the root cause of the usability or user experience issue(s), then it will generate a problem report to the developer.
If there is another usability issue, the analyzer and issue management module 104 will address that issue in the same manner and generate the corresponding problem report. The process is repeated until all the usability issues, including old and new issues, are resolved. At the final stage of this step, a UET or test report will be generated. Step 205 indicates a step of monitoring a software release associated with the SUT, thereby triggering download and execution of the same, which is executed in a continuous manner. The software release is preferably generated once the problem report has been created by the analyzer and issue management module 104. As shown in the portion of Figure 3 relating to the software release, upon obtaining the problem report from the analyzer and issue management module 104 in step 206, a version upgrade is performed on the SUT based on the input from the developer who fixed the issues (see step 207). Subsequently, a software release associated with the SUT will be generated in step 208. The software release will next be made available to the software release post-commit trigger module 105, or other modules with similar functions, so that its presence is apparent and can be detected.
At this point, upon monitoring, the software release post-commit trigger module 105 will subsequently detect the software release. It will thus trigger download of the software release at the test controller 100 and further trigger execution of the same at the test controller 100, whereby the test controller 100 will re-execute the test plan or execute a new UET on the upgraded SUT.

The terms "a" and "an," as used herein, are defined as one or more than one. The term "plurality," as used herein, is defined as two or more than two. The term "another," as used herein, is defined as at least a second or more. The terms "including" and/or "having," as used herein, are defined as comprising (i.e., open language).
While this invention has been particularly shown and described with reference to the exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention as defined by the appended claims.

Claims

1. A system for managing a user experience test in a controlled test environment, characterized in that, the system comprising:
a test controller (100) configured for providing and executing a test plan associated with the user experience test for a system under test to a set of users via a plurality of network connections through a user interface agent (101);
an adaptive load generator agent (102) configured for establishing a user traffic flow by simulating an anticipated system load of the system under test in accordance with the test plan thereof, wherein the user traffic flow represents actions and types of virtual users;
an adaptive network emulator agent (103) configured for establishing one or more emulated network connections by modifying at least three connection parameters including bandwidth, latency and packet loss of one or more of the plurality of network connections in accordance with the test plan thereof;
an analyzer and issue management module (104) configured for providing performance analysis and computing a feedback score in relation to user feedback transmitted by the test controller (100), wherein the feedback score is compared against a score threshold to trigger execution of another user experience test, a functional test and a performance test on a selected part of the test plan for reporting root cause analysis;
a software release post-commit trigger module (105) in communication with the analyzer and issue management module (104) configured for monitoring a software release associated with the system under test thereby triggering download and execution of the same at the test controller (100); and
a test setup agent (106) configured for initializing the controlled test environment and the system under test based on the test plan, the user traffic flow and the one or more emulated network connections thereof.
2. The system according to Claim 1 further comprising:
a test report and test log module (107) configured for managing reports and logs generated during the user experience test; and
a test plan management module (108) configured for preparing the test plan of the user experience test including test tasks, test questions, test configurations, test environment settings, user guides, performance test scripts and functional test scripts.
3. The system according to Claim 1, wherein the one or more of the plurality of network connections is selected based on Internet protocol (IP) address, network ports, media access control (MAC) address of source or destination, and network protocol.
4. The system according to Claim 1, wherein the functional test relates to assessment of feature functions of the system under test.
5. The system according to Claim 1, wherein the performance test relates to assessment of performance of the system under test in the user traffic flow and the one or more emulated network connections.
6. The system according to Claim 1, wherein the root cause analysis is a comparison of results obtained from the user experience test, the functional test and the performance test.
7. The system according to Claim 1, wherein the software release is an upgrade version of the system under test developed based on the root cause analysis.
8. The system according to Claim 1, wherein the analyzer and issue management module (104) generates a problem report comprising the user experience test, the functional test, the performance test and results thereof, and the root cause analysis to a developer of the system under test for upgrading the system under test prior to generation of the software release.
9. A method of managing a user experience test in a controlled test environment, characterized in that, the method comprising:
providing and executing a test plan associated with the user experience test for a system under test to a set of users via a plurality of network connections (200);
establishing a user traffic flow by simulating an anticipated system load of the system under test in accordance with the test plan (201);
establishing one or more emulated network connections by modifying at least three connection parameters including bandwidth, latency and packet loss of one or more of the plurality of network connections in accordance with the test plan (202);
providing performance analysis thereby computing a feedback score based on user feedback received thereof (203);
comparing the feedback score against a score threshold to trigger execution of another user experience test, a functional test and a performance test on a selected part of the test plan for reporting root cause analysis (204); and
monitoring a software release associated with the system under test thereby triggering download and execution of the same (205).
10. The method according to Claim 9 further comprising:
obtaining a problem report comprising the user experience test, the functional test, the performance test and results thereof, and the root cause analysis (206);
performing a version upgrade, based on the problem report, to the system under test (207); and
generating a software release associated with the system under test (208).
PCT/MY2016/050085 2016-02-17 2016-11-30 System for managing user experience test in controlled test environment and method thereof WO2017142393A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2016000283 2016-02-17
MYPI2016000283 2016-02-17

Publications (1)

Publication Number Publication Date
WO2017142393A1 (en) 2017-08-24

Family

ID=59625336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2016/050085 WO2017142393A1 (en) 2016-02-17 2016-11-30 System for managing user experience test in controlled test environment and method thereof

Country Status (1)

Country Link
WO (1) WO2017142393A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040015867A1 (en) * 2002-07-16 2004-01-22 Macko John Steven Travis Automated usability testing system and method
US20120311128A1 (en) * 2011-05-31 2012-12-06 Pechanec Jiri Performance testing in a cloud environment
US20130159774A1 (en) * 2011-12-19 2013-06-20 Siemens Corporation Dynamic reprioritization of test cases during test execution
WO2014088398A1 (en) * 2012-12-06 2014-06-12 Mimos Berhad Automated test environment deployment with metric recommender for performance testing on iaas cloud
US20150363304A1 (en) * 2014-06-17 2015-12-17 Kishan Nagamalla Self-learning and self-validating declarative testing

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107894936A (en) * 2017-11-13 2018-04-10 重庆首亨软件有限公司 A kind of single-chip microcomputer test software
CN113029329A (en) * 2020-09-15 2021-06-25 山东华科信息技术有限公司 Detection system and detection method for batch detection of vibration sensors
CN113218432A (en) * 2020-09-15 2021-08-06 山东华科信息技术有限公司 Detection system and detection method for batch detection of temperature and humidity sensors
US11520686B2 (en) 2021-01-26 2022-12-06 The Toronto-Dominion Bank System and method for facilitating performance testing
US11681607B2 (en) 2021-01-26 2023-06-20 The Toronto-Dominion Bank System and method for facilitating performance testing
WO2023202037A1 (en) * 2022-04-20 2023-10-26 中兴通讯股份有限公司 Service test method and apparatus, and related device
CN115437351A (en) * 2022-09-06 2022-12-06 中国第一汽车股份有限公司 Automated test system, automated test method, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16890788

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16890788

Country of ref document: EP

Kind code of ref document: A1